Nov 8 00:07:23.886903 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 8 00:07:23.886947 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Nov 7 22:41:39 -00 2025
Nov 8 00:07:23.886970 kernel: KASLR enabled
Nov 8 00:07:23.886984 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Nov 8 00:07:23.886999 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390c1018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b43d18
Nov 8 00:07:23.887012 kernel: random: crng init done
Nov 8 00:07:23.887030 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:07:23.887044 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Nov 8 00:07:23.887060 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Nov 8 00:07:23.887078 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887093 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887107 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887122 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887137 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887155 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887174 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887190 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887205 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:07:23.887221 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 8 00:07:23.887236 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Nov 8 00:07:23.887252 kernel: NUMA: Failed to initialise from firmware
Nov 8 00:07:23.887268 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Nov 8 00:07:23.887283 kernel: NUMA: NODE_DATA [mem 0x13966f800-0x139674fff]
Nov 8 00:07:23.887298 kernel: Zone ranges:
Nov 8 00:07:23.887314 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 8 00:07:23.887332 kernel: DMA32 empty
Nov 8 00:07:23.887347 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Nov 8 00:07:23.887362 kernel: Movable zone start for each node
Nov 8 00:07:23.887377 kernel: Early memory node ranges
Nov 8 00:07:23.887393 kernel: node 0: [mem 0x0000000040000000-0x000000013676ffff]
Nov 8 00:07:23.887408 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Nov 8 00:07:23.887424 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Nov 8 00:07:23.887439 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Nov 8 00:07:23.887454 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Nov 8 00:07:23.887470 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Nov 8 00:07:23.887485 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Nov 8 00:07:23.887500 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Nov 8 00:07:23.887519 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Nov 8 00:07:23.887534 kernel: psci: probing for conduit method from ACPI.
Nov 8 00:07:23.887571 kernel: psci: PSCIv1.1 detected in firmware.
Nov 8 00:07:23.888657 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 8 00:07:23.888677 kernel: psci: Trusted OS migration not required
Nov 8 00:07:23.888694 kernel: psci: SMC Calling Convention v1.1
Nov 8 00:07:23.888715 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 8 00:07:23.888732 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976
Nov 8 00:07:23.888751 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096
Nov 8 00:07:23.888768 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 8 00:07:23.888784 kernel: Detected PIPT I-cache on CPU0
Nov 8 00:07:23.888801 kernel: CPU features: detected: GIC system register CPU interface
Nov 8 00:07:23.888817 kernel: CPU features: detected: Hardware dirty bit management
Nov 8 00:07:23.888834 kernel: CPU features: detected: Spectre-v4
Nov 8 00:07:23.888850 kernel: CPU features: detected: Spectre-BHB
Nov 8 00:07:23.888867 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 8 00:07:23.888887 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 8 00:07:23.888903 kernel: CPU features: detected: ARM erratum 1418040
Nov 8 00:07:23.888920 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 8 00:07:23.888936 kernel: alternatives: applying boot alternatives
Nov 8 00:07:23.888956 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:07:23.888973 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:07:23.888990 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:07:23.889007 kernel: Fallback order for Node 0: 0
Nov 8 00:07:23.889023 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1008000
Nov 8 00:07:23.889040 kernel: Policy zone: Normal
Nov 8 00:07:23.889056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:07:23.889076 kernel: software IO TLB: area num 2.
Nov 8 00:07:23.889093 kernel: software IO TLB: mapped [mem 0x00000000fbfff000-0x00000000fffff000] (64MB)
Nov 8 00:07:23.889112 kernel: Memory: 3882808K/4096000K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 213192K reserved, 0K cma-reserved)
Nov 8 00:07:23.889129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:07:23.889146 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:07:23.889163 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:07:23.889181 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:07:23.889197 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:07:23.889215 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:07:23.889233 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:07:23.889251 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:07:23.889268 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 8 00:07:23.889289 kernel: GICv3: 256 SPIs implemented
Nov 8 00:07:23.889306 kernel: GICv3: 0 Extended SPIs implemented
Nov 8 00:07:23.889322 kernel: Root IRQ handler: gic_handle_irq
Nov 8 00:07:23.889339 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 8 00:07:23.889356 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 8 00:07:23.889372 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 8 00:07:23.889389 kernel: ITS@0x0000000008080000: allocated 8192 Devices @1000c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 8 00:07:23.889406 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @1000d0000 (flat, esz 8, psz 64K, shr 1)
Nov 8 00:07:23.889423 kernel: GICv3: using LPI property table @0x00000001000e0000
Nov 8 00:07:23.889440 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000001000f0000
Nov 8 00:07:23.889457 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:07:23.889476 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:07:23.889493 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 8 00:07:23.889510 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 8 00:07:23.889528 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 8 00:07:23.889559 kernel: Console: colour dummy device 80x25
Nov 8 00:07:23.890592 kernel: ACPI: Core revision 20230628
Nov 8 00:07:23.890640 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 8 00:07:23.890660 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:07:23.890679 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:07:23.890699 kernel: landlock: Up and running.
Nov 8 00:07:23.890722 kernel: SELinux: Initializing.
Nov 8 00:07:23.890740 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:07:23.890758 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:07:23.890776 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:07:23.890794 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:07:23.890811 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:07:23.890830 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:07:23.890847 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 8 00:07:23.890864 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 8 00:07:23.890884 kernel: Remapping and enabling EFI services.
Nov 8 00:07:23.890901 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:07:23.890918 kernel: Detected PIPT I-cache on CPU1
Nov 8 00:07:23.890936 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 8 00:07:23.890953 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100100000
Nov 8 00:07:23.890970 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 8 00:07:23.890987 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 8 00:07:23.891004 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:07:23.891021 kernel: SMP: Total of 2 processors activated.
Nov 8 00:07:23.891041 kernel: CPU features: detected: 32-bit EL0 Support
Nov 8 00:07:23.891059 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 8 00:07:23.891076 kernel: CPU features: detected: Common not Private translations
Nov 8 00:07:23.891105 kernel: CPU features: detected: CRC32 instructions
Nov 8 00:07:23.891125 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 8 00:07:23.891144 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 8 00:07:23.891161 kernel: CPU features: detected: LSE atomic instructions
Nov 8 00:07:23.891179 kernel: CPU features: detected: Privileged Access Never
Nov 8 00:07:23.891197 kernel: CPU features: detected: RAS Extension Support
Nov 8 00:07:23.891219 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 8 00:07:23.891237 kernel: CPU: All CPU(s) started at EL1
Nov 8 00:07:23.891255 kernel: alternatives: applying system-wide alternatives
Nov 8 00:07:23.891272 kernel: devtmpfs: initialized
Nov 8 00:07:23.891291 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:07:23.891309 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:07:23.891327 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:07:23.891345 kernel: SMBIOS 3.0.0 present.
Nov 8 00:07:23.891367 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Nov 8 00:07:23.891385 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:07:23.891403 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 8 00:07:23.891422 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 8 00:07:23.891440 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 8 00:07:23.891458 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:07:23.891476 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1
Nov 8 00:07:23.891506 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:07:23.891524 kernel: cpuidle: using governor menu
Nov 8 00:07:23.891556 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 8 00:07:23.891576 kernel: ASID allocator initialised with 32768 entries
Nov 8 00:07:23.891605 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:07:23.892621 kernel: Serial: AMBA PL011 UART driver
Nov 8 00:07:23.892645 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 8 00:07:23.892663 kernel: Modules: 0 pages in range for non-PLT usage
Nov 8 00:07:23.892681 kernel: Modules: 509008 pages in range for PLT usage
Nov 8 00:07:23.892699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:07:23.892717 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:07:23.892743 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 8 00:07:23.892761 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 8 00:07:23.892779 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:07:23.892798 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:07:23.892819 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 8 00:07:23.892838 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 8 00:07:23.892857 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:07:23.892876 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:07:23.892895 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:07:23.892917 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:07:23.892936 kernel: ACPI: Interpreter enabled
Nov 8 00:07:23.892954 kernel: ACPI: Using GIC for interrupt routing
Nov 8 00:07:23.892972 kernel: ACPI: MCFG table detected, 1 entries
Nov 8 00:07:23.892990 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 8 00:07:23.893008 kernel: printk: console [ttyAMA0] enabled
Nov 8 00:07:23.893026 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:07:23.893305 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:07:23.893483 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 8 00:07:23.894751 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 8 00:07:23.894930 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 8 00:07:23.895087 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 8 00:07:23.895111 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 8 00:07:23.895130 kernel: PCI host bridge to bus 0000:00
Nov 8 00:07:23.896834 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 8 00:07:23.896999 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 8 00:07:23.897143 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 8 00:07:23.897284 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:07:23.897467 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 8 00:07:23.898860 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000
Nov 8 00:07:23.899045 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x11289000-0x11289fff]
Nov 8 00:07:23.899210 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000600000-0x8000603fff 64bit pref]
Nov 8 00:07:23.899399 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.899629 kernel: pci 0000:00:02.0: reg 0x10: [mem 0x11288000-0x11288fff]
Nov 8 00:07:23.899840 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.900048 kernel: pci 0000:00:02.1: reg 0x10: [mem 0x11287000-0x11287fff]
Nov 8 00:07:23.900236 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.900402 kernel: pci 0000:00:02.2: reg 0x10: [mem 0x11286000-0x11286fff]
Nov 8 00:07:23.901658 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.901900 kernel: pci 0000:00:02.3: reg 0x10: [mem 0x11285000-0x11285fff]
Nov 8 00:07:23.902076 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.902236 kernel: pci 0000:00:02.4: reg 0x10: [mem 0x11284000-0x11284fff]
Nov 8 00:07:23.902417 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.903798 kernel: pci 0000:00:02.5: reg 0x10: [mem 0x11283000-0x11283fff]
Nov 8 00:07:23.905510 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.907881 kernel: pci 0000:00:02.6: reg 0x10: [mem 0x11282000-0x11282fff]
Nov 8 00:07:23.908069 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.908264 kernel: pci 0000:00:02.7: reg 0x10: [mem 0x11281000-0x11281fff]
Nov 8 00:07:23.908439 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400
Nov 8 00:07:23.908683 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x11280000-0x11280fff]
Nov 8 00:07:23.908882 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002
Nov 8 00:07:23.909047 kernel: pci 0000:00:04.0: reg 0x10: [io 0x0000-0x0007]
Nov 8 00:07:23.909227 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:07:23.909398 kernel: pci 0000:01:00.0: reg 0x14: [mem 0x11000000-0x11000fff]
Nov 8 00:07:23.911699 kernel: pci 0000:01:00.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 8 00:07:23.911925 kernel: pci 0000:01:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Nov 8 00:07:23.912108 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 8 00:07:23.912318 kernel: pci 0000:02:00.0: reg 0x10: [mem 0x10e00000-0x10e03fff 64bit]
Nov 8 00:07:23.912498 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000
Nov 8 00:07:23.914458 kernel: pci 0000:03:00.0: reg 0x14: [mem 0x10c00000-0x10c00fff]
Nov 8 00:07:23.914745 kernel: pci 0000:03:00.0: reg 0x20: [mem 0x8000100000-0x8000103fff 64bit pref]
Nov 8 00:07:23.914967 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00
Nov 8 00:07:23.915134 kernel: pci 0000:04:00.0: reg 0x20: [mem 0x8000200000-0x8000203fff 64bit pref]
Nov 8 00:07:23.915324 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 8 00:07:23.915492 kernel: pci 0000:05:00.0: reg 0x14: [mem 0x10800000-0x10800fff]
Nov 8 00:07:23.917450 kernel: pci 0000:05:00.0: reg 0x20: [mem 0x8000300000-0x8000303fff 64bit pref]
Nov 8 00:07:23.917835 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000
Nov 8 00:07:23.918013 kernel: pci 0000:06:00.0: reg 0x14: [mem 0x10600000-0x10600fff]
Nov 8 00:07:23.918189 kernel: pci 0000:06:00.0: reg 0x20: [mem 0x8000400000-0x8000403fff 64bit pref]
Nov 8 00:07:23.918460 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000
Nov 8 00:07:23.918676 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x10400000-0x10400fff]
Nov 8 00:07:23.918809 kernel: pci 0000:07:00.0: reg 0x20: [mem 0x8000500000-0x8000503fff 64bit pref]
Nov 8 00:07:23.918930 kernel: pci 0000:07:00.0: reg 0x30: [mem 0xfff80000-0xffffffff pref]
Nov 8 00:07:23.919051 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Nov 8 00:07:23.919170 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Nov 8 00:07:23.919284 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Nov 8 00:07:23.919411 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Nov 8 00:07:23.919527 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Nov 8 00:07:23.921857 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Nov 8 00:07:23.921998 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 8 00:07:23.922113 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Nov 8 00:07:23.922239 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Nov 8 00:07:23.922360 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 8 00:07:23.922477 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Nov 8 00:07:23.922704 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Nov 8 00:07:23.923033 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 8 00:07:23.923115 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Nov 8 00:07:23.923186 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Nov 8 00:07:23.923260 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 8 00:07:23.923329 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Nov 8 00:07:23.923406 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Nov 8 00:07:23.923487 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 8 00:07:23.923574 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Nov 8 00:07:23.924596 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Nov 8 00:07:23.924683 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 8 00:07:23.924755 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Nov 8 00:07:23.924825 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Nov 8 00:07:23.924902 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 8 00:07:23.924975 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Nov 8 00:07:23.925055 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Nov 8 00:07:23.925129 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x10000000-0x101fffff]
Nov 8 00:07:23.925201 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:07:23.925273 kernel: pci 0000:00:02.1: BAR 14: assigned [mem 0x10200000-0x103fffff]
Nov 8 00:07:23.925344 kernel: pci 0000:00:02.1: BAR 15: assigned [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:07:23.925416 kernel: pci 0000:00:02.2: BAR 14: assigned [mem 0x10400000-0x105fffff]
Nov 8 00:07:23.925492 kernel: pci 0000:00:02.2: BAR 15: assigned [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:07:23.925632 kernel: pci 0000:00:02.3: BAR 14: assigned [mem 0x10600000-0x107fffff]
Nov 8 00:07:23.925713 kernel: pci 0000:00:02.3: BAR 15: assigned [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:07:23.925786 kernel: pci 0000:00:02.4: BAR 14: assigned [mem 0x10800000-0x109fffff]
Nov 8 00:07:23.925857 kernel: pci 0000:00:02.4: BAR 15: assigned [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:07:23.925928 kernel: pci 0000:00:02.5: BAR 14: assigned [mem 0x10a00000-0x10bfffff]
Nov 8 00:07:23.925998 kernel: pci 0000:00:02.5: BAR 15: assigned [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:07:23.926076 kernel: pci 0000:00:02.6: BAR 14: assigned [mem 0x10c00000-0x10dfffff]
Nov 8 00:07:23.926146 kernel: pci 0000:00:02.6: BAR 15: assigned [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:07:23.926218 kernel: pci 0000:00:02.7: BAR 14: assigned [mem 0x10e00000-0x10ffffff]
Nov 8 00:07:23.926288 kernel: pci 0000:00:02.7: BAR 15: assigned [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:07:23.926359 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x11000000-0x111fffff]
Nov 8 00:07:23.926430 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:07:23.926505 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8001200000-0x8001203fff 64bit pref]
Nov 8 00:07:23.926645 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x11200000-0x11200fff]
Nov 8 00:07:23.926777 kernel: pci 0000:00:02.0: BAR 0: assigned [mem 0x11201000-0x11201fff]
Nov 8 00:07:23.926847 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 8 00:07:23.926912 kernel: pci 0000:00:02.1: BAR 0: assigned [mem 0x11202000-0x11202fff]
Nov 8 00:07:23.926977 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 8 00:07:23.927041 kernel: pci 0000:00:02.2: BAR 0: assigned [mem 0x11203000-0x11203fff]
Nov 8 00:07:23.927106 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 8 00:07:23.927170 kernel: pci 0000:00:02.3: BAR 0: assigned [mem 0x11204000-0x11204fff]
Nov 8 00:07:23.927239 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 8 00:07:23.927304 kernel: pci 0000:00:02.4: BAR 0: assigned [mem 0x11205000-0x11205fff]
Nov 8 00:07:23.927368 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 8 00:07:23.927432 kernel: pci 0000:00:02.5: BAR 0: assigned [mem 0x11206000-0x11206fff]
Nov 8 00:07:23.927497 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 8 00:07:23.927588 kernel: pci 0000:00:02.6: BAR 0: assigned [mem 0x11207000-0x11207fff]
Nov 8 00:07:23.927676 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 8 00:07:23.927744 kernel: pci 0000:00:02.7: BAR 0: assigned [mem 0x11208000-0x11208fff]
Nov 8 00:07:23.927815 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 8 00:07:23.927882 kernel: pci 0000:00:03.0: BAR 0: assigned [mem 0x11209000-0x11209fff]
Nov 8 00:07:23.927948 kernel: pci 0000:00:03.0: BAR 13: assigned [io 0x9000-0x9fff]
Nov 8 00:07:23.928024 kernel: pci 0000:00:04.0: BAR 0: assigned [io 0xa000-0xa007]
Nov 8 00:07:23.928102 kernel: pci 0000:01:00.0: BAR 6: assigned [mem 0x10000000-0x1007ffff pref]
Nov 8 00:07:23.928172 kernel: pci 0000:01:00.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 8 00:07:23.928241 kernel: pci 0000:01:00.0: BAR 1: assigned [mem 0x10080000-0x10080fff]
Nov 8 00:07:23.928309 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 8 00:07:23.928378 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 8 00:07:23.928443 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Nov 8 00:07:23.928509 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:07:23.928664 kernel: pci 0000:02:00.0: BAR 0: assigned [mem 0x10200000-0x10203fff 64bit]
Nov 8 00:07:23.928740 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 8 00:07:23.928804 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 8 00:07:23.928868 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Nov 8 00:07:23.928933 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:07:23.929005 kernel: pci 0000:03:00.0: BAR 4: assigned [mem 0x8000400000-0x8000403fff 64bit pref]
Nov 8 00:07:23.929073 kernel: pci 0000:03:00.0: BAR 1: assigned [mem 0x10400000-0x10400fff]
Nov 8 00:07:23.929141 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 8 00:07:23.929204 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 8 00:07:23.929274 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Nov 8 00:07:23.929338 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:07:23.929411 kernel: pci 0000:04:00.0: BAR 4: assigned [mem 0x8000600000-0x8000603fff 64bit pref]
Nov 8 00:07:23.929486 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 8 00:07:23.929563 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 8 00:07:23.929641 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Nov 8 00:07:23.929721 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:07:23.929795 kernel: pci 0000:05:00.0: BAR 4: assigned [mem 0x8000800000-0x8000803fff 64bit pref]
Nov 8 00:07:23.929869 kernel: pci 0000:05:00.0: BAR 1: assigned [mem 0x10800000-0x10800fff]
Nov 8 00:07:23.929937 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 8 00:07:23.930002 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 8 00:07:23.930066 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Nov 8 00:07:23.930131 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:07:23.930203 kernel: pci 0000:06:00.0: BAR 4: assigned [mem 0x8000a00000-0x8000a03fff 64bit pref]
Nov 8 00:07:23.930271 kernel: pci 0000:06:00.0: BAR 1: assigned [mem 0x10a00000-0x10a00fff]
Nov 8 00:07:23.930336 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 8 00:07:23.930404 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 8 00:07:23.930469 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Nov 8 00:07:23.930534 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:07:23.930717 kernel: pci 0000:07:00.0: BAR 6: assigned [mem 0x10c00000-0x10c7ffff pref]
Nov 8 00:07:23.930790 kernel: pci 0000:07:00.0: BAR 4: assigned [mem 0x8000c00000-0x8000c03fff 64bit pref]
Nov 8 00:07:23.930856 kernel: pci 0000:07:00.0: BAR 1: assigned [mem 0x10c80000-0x10c80fff]
Nov 8 00:07:23.930922 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 8 00:07:23.930986 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 8 00:07:23.931055 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Nov 8 00:07:23.931119 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:07:23.931196 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 8 00:07:23.931260 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 8 00:07:23.931324 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Nov 8 00:07:23.931389 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:07:23.931455 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 8 00:07:23.931520 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Nov 8 00:07:23.932079 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]
Nov 8 00:07:23.932166 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:07:23.932238 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 8 00:07:23.932299 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 8 00:07:23.932358 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 8 00:07:23.932431 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 8 00:07:23.932494 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff]
Nov 8 00:07:23.932679 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref]
Nov 8 00:07:23.932771 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff]
Nov 8 00:07:23.932843 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff]
Nov 8 00:07:23.932909 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref]
Nov 8 00:07:23.932980 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff]
Nov 8 00:07:23.933709 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff]
Nov 8 00:07:23.933795 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref]
Nov 8 00:07:23.933871 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Nov 8 00:07:23.933937 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff]
Nov 8 00:07:23.934012 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref]
Nov 8 00:07:23.934082 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff]
Nov 8 00:07:23.934144 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff]
Nov 8 00:07:23.934207 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref]
Nov 8 00:07:23.934280 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff]
Nov 8 00:07:23.934341 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff]
Nov 8 00:07:23.934406 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref]
Nov 8 00:07:23.934477 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff]
Nov 8 00:07:23.935557 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff]
Nov 8 00:07:23.935682 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref]
Nov 8 00:07:23.935765 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff]
Nov 8 00:07:23.935846 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff]
Nov 8 00:07:23.935916 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref]
Nov 8 00:07:23.936010 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff]
Nov 8 00:07:23.936086 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff]
Nov 8 00:07:23.936156 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref]
Nov 8 00:07:23.936167 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 8 00:07:23.936176 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 8 00:07:23.936185 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 8 00:07:23.936194 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 8 00:07:23.936203 kernel: iommu: Default domain type: Translated
Nov 8 00:07:23.936211 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 8 00:07:23.936220 kernel: efivars: Registered efivars operations
Nov 8 00:07:23.936229 kernel: vgaarb: loaded
Nov 8 00:07:23.936240 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 8 00:07:23.936249 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:07:23.936258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:07:23.936266 kernel: pnp: PnP ACPI init
Nov 8 00:07:23.936353 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 8 00:07:23.936366 kernel: pnp: PnP ACPI: found 1 devices
Nov 8 00:07:23.936375 kernel: NET: Registered PF_INET protocol family
Nov 8 00:07:23.936384 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:07:23.936395 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:07:23.936404 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:07:23.936417 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:07:23.936431 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:07:23.936443 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:07:23.936454 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:07:23.936464 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:07:23.936472 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:07:23.936607 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002)
Nov 8 00:07:23.936626 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:07:23.936635 kernel: kvm [1]: HYP mode not available
Nov 8 00:07:23.936644 kernel: Initialise system trusted keyrings
Nov 8 00:07:23.936653 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:07:23.936662 kernel: Key type asymmetric registered
Nov 8 00:07:23.936671 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:07:23.936680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 8 00:07:23.936689 kernel: io scheduler mq-deadline registered
Nov 8 00:07:23.936698 kernel: io scheduler kyber registered
Nov 8 00:07:23.936708 kernel: io scheduler bfq registered
Nov 8 00:07:23.936718 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 8 00:07:23.936835 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50
Nov 8 00:07:23.936921 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50
Nov 8 00:07:23.937033 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.937693 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51
Nov 8 00:07:23.937782 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51
Nov 8 00:07:23.937863 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.937939 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52
Nov 8 00:07:23.938418 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52
Nov 8 00:07:23.938510 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.939333 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53
Nov 8 00:07:23.939430 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53
Nov 8 00:07:23.939511 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.939644 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54
Nov 8 00:07:23.939718 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54
Nov 8 00:07:23.939861 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.939936 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55
Nov 8 00:07:23.940002 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55
Nov 8 00:07:23.940073 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.940143 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56
Nov 8 00:07:23.940210 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56
Nov 8 00:07:23.940276 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.940345 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57
Nov 8 00:07:23.940412 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57
Nov 8 00:07:23.940482 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.940493 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38
Nov 8 00:07:23.940605 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58
Nov 8 00:07:23.940697 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58
Nov 8 00:07:23.940765 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Nov 8 00:07:23.940776 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 8 00:07:23.940789 kernel: ACPI: button: Power Button [PWRB]
Nov 8 00:07:23.940798 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 8 00:07:23.940873 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002)
Nov 8 00:07:23.940946 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Nov 8 00:07:23.940957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:07:23.940965 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 8 00:07:23.941036 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001)
Nov 8 00:07:23.941047 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A
Nov 8 00:07:23.941055 kernel: thunder_xcv, ver 1.0
Nov 8 00:07:23.941065 kernel: thunder_bgx, ver 1.0
Nov 8 00:07:23.941073 kernel: nicpf, ver 1.0
Nov 8 00:07:23.941081 kernel: nicvf, ver 1.0
Nov 8 00:07:23.941159 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 8 00:07:23.941224 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:07:23 UTC (1762560443)
Nov 8 00:07:23.941234 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:07:23.941243 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 8 00:07:23.941251 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 8 00:07:23.941261 kernel: watchdog: Hard watchdog permanently disabled
Nov 8 00:07:23.941269 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:07:23.941277 kernel: Segment Routing with IPv6
Nov 8 00:07:23.941285 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:07:23.941294 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:07:23.941302 kernel: Key type dns_resolver registered
Nov 8 00:07:23.941310 kernel: registered taskstats version 1
Nov 8 00:07:23.941318 kernel: Loading compiled-in X.509 certificates
Nov 8 00:07:23.941326 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e35af6a719ba4c60f9d6788b11f5e5836ebf73b5'
Nov 8 00:07:23.941335 kernel: Key type .fscrypt registered
Nov 8 00:07:23.941343 kernel: Key type fscrypt-provisioning registered
Nov 8 00:07:23.941351 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:07:23.941359 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:07:23.941366 kernel: ima: No architecture policies found
Nov 8 00:07:23.941374 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 8 00:07:23.941382 kernel: clk: Disabling unused clocks
Nov 8 00:07:23.941390 kernel: Freeing unused kernel memory: 39424K
Nov 8 00:07:23.941398 kernel: Run /init as init process
Nov 8 00:07:23.941407 kernel: with arguments:
Nov 8 00:07:23.941415 kernel: /init
Nov 8 00:07:23.941423 kernel: with environment:
Nov 8 00:07:23.941431 kernel: HOME=/
Nov 8 00:07:23.941439 kernel: TERM=linux
Nov 8 00:07:23.941449 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:07:23.941459 systemd[1]: Detected virtualization kvm.
Nov 8 00:07:23.941468 systemd[1]: Detected architecture arm64.
Nov 8 00:07:23.941478 systemd[1]: Running in initrd.
Nov 8 00:07:23.941486 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:07:23.941495 systemd[1]: Hostname set to .
Nov 8 00:07:23.941503 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:07:23.941512 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:07:23.941520 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:07:23.941529 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:07:23.941538 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:07:23.941560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:07:23.941569 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:07:23.941656 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:07:23.941669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:07:23.941678 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:07:23.941687 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:07:23.941695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:07:23.941706 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:07:23.941717 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:07:23.941725 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:07:23.941733 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:07:23.941742 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:07:23.941750 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:07:23.941758 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:07:23.941767 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:07:23.941777 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:07:23.941785 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:07:23.941794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:07:23.941802 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:07:23.941811 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:07:23.941819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:07:23.941828 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:07:23.941836 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:07:23.941844 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:07:23.941855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:07:23.941863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:07:23.941872 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:07:23.941880 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:07:23.941888 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:07:23.941923 systemd-journald[237]: Collecting audit messages is disabled.
Nov 8 00:07:23.941946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:07:23.941955 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:23.941966 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:07:23.941975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:07:23.941986 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:07:23.941996 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:07:23.942006 kernel: Bridge firewalling registered
Nov 8 00:07:23.942015 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:07:23.942026 systemd-journald[237]: Journal started
Nov 8 00:07:23.942047 systemd-journald[237]: Runtime Journal (/run/log/journal/5d095f3fd63c42c09c12d139b6cced21) is 8.0M, max 76.6M, 68.6M free.
Nov 8 00:07:23.914738 systemd-modules-load[238]: Inserted module 'overlay'
Nov 8 00:07:23.937428 systemd-modules-load[238]: Inserted module 'br_netfilter'
Nov 8 00:07:23.950091 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:07:23.954147 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:07:23.954274 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:07:23.956940 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:07:23.957737 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:07:23.966860 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:07:23.970799 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:07:23.981089 dracut-cmdline[269]: dracut-dracut-053
Nov 8 00:07:23.986333 dracut-cmdline[269]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=653fdcb8a67e255793a721f32d76976d3ed6223b235b7c618cf75e5edffbdb68
Nov 8 00:07:23.990878 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:07:24.003368 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:07:24.032387 systemd-resolved[289]: Positive Trust Anchors:
Nov 8 00:07:24.032402 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:07:24.032433 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:07:24.042133 systemd-resolved[289]: Defaulting to hostname 'linux'.
Nov 8 00:07:24.044133 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:07:24.045507 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:07:24.078645 kernel: SCSI subsystem initialized
Nov 8 00:07:24.083615 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:07:24.091647 kernel: iscsi: registered transport (tcp)
Nov 8 00:07:24.105633 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:07:24.105714 kernel: QLogic iSCSI HBA Driver
Nov 8 00:07:24.156059 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:07:24.163765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:07:24.182972 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:07:24.183112 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:07:24.183156 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:07:24.233657 kernel: raid6: neonx8 gen() 15530 MB/s
Nov 8 00:07:24.250640 kernel: raid6: neonx4 gen() 15377 MB/s
Nov 8 00:07:24.267631 kernel: raid6: neonx2 gen() 12912 MB/s
Nov 8 00:07:24.284651 kernel: raid6: neonx1 gen() 10308 MB/s
Nov 8 00:07:24.301636 kernel: raid6: int64x8 gen() 6810 MB/s
Nov 8 00:07:24.318656 kernel: raid6: int64x4 gen() 7256 MB/s
Nov 8 00:07:24.335636 kernel: raid6: int64x2 gen() 6052 MB/s
Nov 8 00:07:24.352656 kernel: raid6: int64x1 gen() 5012 MB/s
Nov 8 00:07:24.352739 kernel: raid6: using algorithm neonx8 gen() 15530 MB/s
Nov 8 00:07:24.369639 kernel: raid6: .... xor() 11705 MB/s, rmw enabled
Nov 8 00:07:24.369712 kernel: raid6: using neon recovery algorithm
Nov 8 00:07:24.374860 kernel: xor: measuring software checksum speed
Nov 8 00:07:24.374929 kernel: 8regs : 19754 MB/sec
Nov 8 00:07:24.375686 kernel: 32regs : 19267 MB/sec
Nov 8 00:07:24.375712 kernel: arm64_neon : 26963 MB/sec
Nov 8 00:07:24.375729 kernel: xor: using function: arm64_neon (26963 MB/sec)
Nov 8 00:07:24.426614 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:07:24.440285 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:07:24.446739 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:07:24.461984 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Nov 8 00:07:24.465491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:07:24.476852 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:07:24.492643 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Nov 8 00:07:24.532937 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:07:24.539800 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:07:24.592007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:07:24.599919 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:07:24.622090 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:07:24.622860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:07:24.624496 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:07:24.625225 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:07:24.632767 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:07:24.646472 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:07:24.689616 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:07:24.689835 kernel: ACPI: bus type USB registered
Nov 8 00:07:24.691312 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 8 00:07:24.691371 kernel: usbcore: registered new interface driver usbfs
Nov 8 00:07:24.691384 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:07:24.691402 kernel: usbcore: registered new interface driver hub
Nov 8 00:07:24.699616 kernel: usbcore: registered new device driver usb
Nov 8 00:07:24.707661 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:07:24.708668 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:07:24.710829 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:07:24.713793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:07:24.714011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:24.715684 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:07:24.724093 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:07:24.741784 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:24.752273 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:07:24.758596 kernel: sr 0:0:0:0: Power-on or device reset occurred
Nov 8 00:07:24.758796 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray
Nov 8 00:07:24.758941 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:07:24.759629 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 8 00:07:24.763660 kernel: sd 0:0:0:1: Power-on or device reset occurred
Nov 8 00:07:24.763818 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 8 00:07:24.766016 kernel: sd 0:0:0:1: [sda] Write Protect is off
Nov 8 00:07:24.766151 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08
Nov 8 00:07:24.766236 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:07:24.769860 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:07:24.769901 kernel: GPT:17805311 != 80003071
Nov 8 00:07:24.769911 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:07:24.769921 kernel: GPT:17805311 != 80003071
Nov 8 00:07:24.770902 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:07:24.770930 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:24.772669 kernel: sd 0:0:0:1: [sda] Attached SCSI disk
Nov 8 00:07:24.777013 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:07:24.788851 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:07:24.789056 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Nov 8 00:07:24.790756 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Nov 8 00:07:24.793732 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Nov 8 00:07:24.794088 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Nov 8 00:07:24.795399 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Nov 8 00:07:24.796600 kernel: hub 1-0:1.0: USB hub found
Nov 8 00:07:24.796760 kernel: hub 1-0:1.0: 4 ports detected
Nov 8 00:07:24.801602 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Nov 8 00:07:24.805085 kernel: hub 2-0:1.0: USB hub found
Nov 8 00:07:24.805329 kernel: hub 2-0:1.0: 4 ports detected
Nov 8 00:07:24.817749 kernel: BTRFS: device fsid 55a292e1-3824-4229-a9ae-952140d2698c devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (506)
Nov 8 00:07:24.821617 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (520)
Nov 8 00:07:24.835568 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:07:24.843383 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:07:24.850614 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:07:24.856139 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:07:24.859449 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:07:24.867784 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:07:24.875894 disk-uuid[575]: Primary Header is updated.
Nov 8 00:07:24.875894 disk-uuid[575]: Secondary Entries is updated.
Nov 8 00:07:24.875894 disk-uuid[575]: Secondary Header is updated.
Nov 8 00:07:24.886603 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:24.890601 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:24.894604 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:25.040408 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 8 00:07:25.173307 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Nov 8 00:07:25.173359 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Nov 8 00:07:25.174605 kernel: usbcore: registered new interface driver usbhid
Nov 8 00:07:25.174632 kernel: usbhid: USB HID core driver
Nov 8 00:07:25.280703 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Nov 8 00:07:25.408646 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Nov 8 00:07:25.461663 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Nov 8 00:07:25.902967 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:07:25.903607 disk-uuid[577]: The operation has completed successfully.
Nov 8 00:07:25.954151 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:07:25.954269 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:07:25.972878 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:07:25.977543 sh[594]: Success
Nov 8 00:07:25.992631 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 8 00:07:26.040515 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:07:26.049047 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:07:26.051607 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:07:26.077842 kernel: BTRFS info (device dm-0): first mount of filesystem 55a292e1-3824-4229-a9ae-952140d2698c
Nov 8 00:07:26.077901 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:26.077921 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:07:26.078835 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:07:26.078876 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:07:26.085640 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:07:26.087764 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:07:26.090196 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:07:26.101844 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:07:26.106110 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:07:26.116884 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:26.116930 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:26.116951 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:07:26.125612 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:07:26.125665 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:07:26.135632 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:26.135303 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:07:26.143870 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:07:26.149784 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:07:26.225239 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:07:26.234417 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:07:26.247931 ignition[678]: Ignition 2.19.0
Nov 8 00:07:26.248483 ignition[678]: Stage: fetch-offline
Nov 8 00:07:26.248523 ignition[678]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:26.250855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:07:26.248548 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:26.248719 ignition[678]: parsed url from cmdline: ""
Nov 8 00:07:26.248722 ignition[678]: no config URL provided
Nov 8 00:07:26.248727 ignition[678]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:07:26.248734 ignition[678]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:07:26.248739 ignition[678]: failed to fetch config: resource requires networking
Nov 8 00:07:26.248913 ignition[678]: Ignition finished successfully
Nov 8 00:07:26.256119 systemd-networkd[780]: lo: Link UP
Nov 8 00:07:26.256123 systemd-networkd[780]: lo: Gained carrier
Nov 8 00:07:26.257604 systemd-networkd[780]: Enumeration completed
Nov 8 00:07:26.258030 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:26.258033 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:07:26.258309 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:07:26.258808 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:26.258811 systemd-networkd[780]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:07:26.259292 systemd-networkd[780]: eth0: Link UP
Nov 8 00:07:26.259295 systemd-networkd[780]: eth0: Gained carrier
Nov 8 00:07:26.259303 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:26.260760 systemd[1]: Reached target network.target - Network.
Nov 8 00:07:26.263879 systemd-networkd[780]: eth1: Link UP
Nov 8 00:07:26.263883 systemd-networkd[780]: eth1: Gained carrier
Nov 8 00:07:26.263891 systemd-networkd[780]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:07:26.264764 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:07:26.292087 ignition[783]: Ignition 2.19.0
Nov 8 00:07:26.292669 ignition[783]: Stage: fetch
Nov 8 00:07:26.292844 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:26.292854 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:26.292948 ignition[783]: parsed url from cmdline: ""
Nov 8 00:07:26.292952 ignition[783]: no config URL provided
Nov 8 00:07:26.292957 ignition[783]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:07:26.292964 ignition[783]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:07:26.295695 systemd-networkd[780]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 8 00:07:26.292981 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 8 00:07:26.294982 ignition[783]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:07:26.319694 systemd-networkd[780]: eth0: DHCPv4 address 46.224.11.50/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 8 00:07:26.495615 ignition[783]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Nov 8 00:07:26.502121 ignition[783]: GET result: OK
Nov 8 00:07:26.502270 ignition[783]: parsing config with SHA512: 1fb3c74ea93259697e6dfe866045bfaad25e7476b05d01aad1b93d195595f62dd7c534242c98596427f667b33b79cf04d20874e0ef0500dbfe47dbc4a11427cb
Nov 8 00:07:26.507261 unknown[783]: fetched base config from "system"
Nov 8 00:07:26.507291 unknown[783]: fetched base config from "system"
Nov 8 00:07:26.507299 unknown[783]: fetched user config from "hetzner"
Nov 8 00:07:26.510347 ignition[783]: fetch: fetch complete
Nov 8 00:07:26.510354 ignition[783]: fetch: fetch passed
Nov 8 00:07:26.510422 ignition[783]: Ignition finished successfully
Nov 8 00:07:26.513370 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:07:26.518783 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:07:26.536104 ignition[790]: Ignition 2.19.0
Nov 8 00:07:26.536116 ignition[790]: Stage: kargs
Nov 8 00:07:26.536304 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:26.536313 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:26.539639 ignition[790]: kargs: kargs passed
Nov 8 00:07:26.540088 ignition[790]: Ignition finished successfully
Nov 8 00:07:26.542623 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:07:26.547856 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:07:26.563385 ignition[796]: Ignition 2.19.0
Nov 8 00:07:26.564022 ignition[796]: Stage: disks
Nov 8 00:07:26.564266 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:26.564278 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:26.565301 ignition[796]: disks: disks passed
Nov 8 00:07:26.567277 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:07:26.565348 ignition[796]: Ignition finished successfully
Nov 8 00:07:26.568864 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:07:26.569504 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:07:26.570943 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:07:26.572007 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:07:26.572991 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:07:26.580970 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:07:26.599287 systemd-fsck[805]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 8 00:07:26.605019 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:07:26.614707 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:07:26.662640 kernel: EXT4-fs (sda9): mounted filesystem ba97f76e-2e9b-450a-8320-3c4b94a19632 r/w with ordered data mode. Quota mode: none.
Nov 8 00:07:26.663348 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:07:26.664874 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:07:26.675746 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:07:26.679392 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:07:26.693250 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:07:26.694105 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (813)
Nov 8 00:07:26.696747 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:26.696836 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:26.696850 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:07:26.696896 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:07:26.696931 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:07:26.699866 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:07:26.703602 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:07:26.703648 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:07:26.707089 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:07:26.713502 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:07:26.769193 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:07:26.770994 coreos-metadata[815]: Nov 08 00:07:26.770 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 8 00:07:26.773680 coreos-metadata[815]: Nov 08 00:07:26.773 INFO Fetch successful
Nov 8 00:07:26.775417 coreos-metadata[815]: Nov 08 00:07:26.775 INFO wrote hostname ci-4081-3-6-n-fb20dfd731 to /sysroot/etc/hostname
Nov 8 00:07:26.778934 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:07:26.782340 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:07:26.787621 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:07:26.792861 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:07:26.888697 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:07:26.897728 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:07:26.902555 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:07:26.912598 kernel: BTRFS info (device sda6): last unmount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:26.933777 ignition[930]: INFO : Ignition 2.19.0
Nov 8 00:07:26.934441 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:07:26.936308 ignition[930]: INFO : Stage: mount
Nov 8 00:07:26.936308 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:26.936308 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:26.939268 ignition[930]: INFO : mount: mount passed
Nov 8 00:07:26.939268 ignition[930]: INFO : Ignition finished successfully
Nov 8 00:07:26.941220 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:07:26.947765 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:07:27.078925 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:07:27.085895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:07:27.096221 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (941)
Nov 8 00:07:27.096372 kernel: BTRFS info (device sda6): first mount of filesystem 7afafbf9-edbd-49b5-ac90-6fc331f667e9
Nov 8 00:07:27.096415 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:07:27.096709 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:07:27.100625 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:07:27.100706 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:07:27.104870 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:07:27.132425 ignition[959]: INFO : Ignition 2.19.0
Nov 8 00:07:27.132425 ignition[959]: INFO : Stage: files
Nov 8 00:07:27.132425 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:27.132425 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:27.132425 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:07:27.137315 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:07:27.137315 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:07:27.140821 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:07:27.142287 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:07:27.144050 unknown[959]: wrote ssh authorized keys file for user: core
Nov 8 00:07:27.145196 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:07:27.149175 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:07:27.149175 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 8 00:07:27.274408 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:07:27.351498 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:07:27.351498 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:27.354928 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 8 00:07:27.385706 systemd-networkd[780]: eth1: Gained IPv6LL
Nov 8 00:07:27.513928 systemd-networkd[780]: eth0: Gained IPv6LL
Nov 8 00:07:27.642067 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:07:28.226228 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:07:28.226228 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:07:28.228918 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:07:28.228918 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:07:28.228918 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:07:28.228918 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:07:28.233439 ignition[959]: INFO : files: files passed
Nov 8 00:07:28.233439 ignition[959]: INFO : Ignition finished successfully
Nov 8 00:07:28.231253 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:07:28.239737 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:07:28.242026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:07:28.244949 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:07:28.246652 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:07:28.270507 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:07:28.270507 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:07:28.274634 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:07:28.276549 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:07:28.278279 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:07:28.283738 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:07:28.313858 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:07:28.313987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:07:28.315739 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:07:28.317389 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:07:28.318448 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:07:28.326957 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:07:28.344537 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:07:28.351916 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:07:28.365015 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:07:28.366411 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:07:28.367160 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:07:28.368258 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:07:28.368417 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:07:28.369929 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:07:28.371163 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:07:28.372147 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:07:28.373234 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:07:28.374269 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:07:28.375297 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:07:28.376218 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:07:28.377283 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:07:28.378256 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:07:28.379211 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:07:28.380028 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:07:28.380186 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:07:28.381305 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:07:28.382344 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:07:28.383282 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:07:28.387693 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:07:28.388575 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:07:28.388755 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:07:28.391835 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:07:28.392000 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:07:28.393866 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:07:28.394021 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:07:28.395694 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 8 00:07:28.395891 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:07:28.404356 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:07:28.406815 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:07:28.407855 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:07:28.408741 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:07:28.412844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:07:28.412996 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:07:28.419966 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:07:28.421680 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:07:28.428801 ignition[1010]: INFO : Ignition 2.19.0
Nov 8 00:07:28.430825 ignition[1010]: INFO : Stage: umount
Nov 8 00:07:28.430825 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:07:28.430825 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 8 00:07:28.430825 ignition[1010]: INFO : umount: umount passed
Nov 8 00:07:28.430825 ignition[1010]: INFO : Ignition finished successfully
Nov 8 00:07:28.434390 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:07:28.435956 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:07:28.436659 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:07:28.438071 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:07:28.439619 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:07:28.440989 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:07:28.441088 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:07:28.442156 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:07:28.442204 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:07:28.443681 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:07:28.443721 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:07:28.444534 systemd[1]: Stopped target network.target - Network.
Nov 8 00:07:28.445438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:07:28.445486 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:07:28.446466 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:07:28.447251 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:07:28.451662 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:07:28.452496 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:07:28.454324 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:07:28.455788 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:07:28.455850 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:07:28.457856 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:07:28.457912 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:07:28.459459 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:07:28.459536 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:07:28.461336 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:07:28.461373 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:07:28.462196 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:07:28.462235 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:07:28.463356 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:07:28.464813 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:07:28.471649 systemd-networkd[780]: eth1: DHCPv6 lease lost
Nov 8 00:07:28.475630 systemd-networkd[780]: eth0: DHCPv6 lease lost
Nov 8 00:07:28.475750 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:07:28.475923 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:07:28.479146 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:07:28.479483 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:07:28.483558 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:07:28.483742 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:07:28.489731 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:07:28.490943 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:07:28.491049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:07:28.492737 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:07:28.492790 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:07:28.493567 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:07:28.493630 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:07:28.495128 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:07:28.495178 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:07:28.497700 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:07:28.511079 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:07:28.511324 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:07:28.514232 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:07:28.514396 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:07:28.515821 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:07:28.515867 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:07:28.517773 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:07:28.517937 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:07:28.520374 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:07:28.520477 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:07:28.522789 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:07:28.522835 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:07:28.524228 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:07:28.524272 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:07:28.530873 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:07:28.533448 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:07:28.533554 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:07:28.536499 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 00:07:28.536634 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:07:28.539013 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:07:28.539069 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:07:28.539780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:07:28.539823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:07:28.540750 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:07:28.542615 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:07:28.543952 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:07:28.558885 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:07:28.570477 systemd[1]: Switching root.
Nov 8 00:07:28.615969 systemd-journald[237]: Journal stopped
Nov 8 00:07:29.459923 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:07:29.460004 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:07:29.460018 kernel: SELinux: policy capability open_perms=1
Nov 8 00:07:29.460029 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:07:29.460040 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:07:29.460050 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:07:29.460065 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:07:29.460088 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:07:29.460100 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:07:29.460111 kernel: audit: type=1403 audit(1762560448.766:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:07:29.460122 systemd[1]: Successfully loaded SELinux policy in 37.054ms.
Nov 8 00:07:29.460144 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.439ms.
Nov 8 00:07:29.460157 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:07:29.460169 systemd[1]: Detected virtualization kvm.
Nov 8 00:07:29.460185 systemd[1]: Detected architecture arm64.
Nov 8 00:07:29.460199 systemd[1]: Detected first boot.
Nov 8 00:07:29.460210 systemd[1]: Hostname set to <ci-4081-3-6-n-fb20dfd731>.
Nov 8 00:07:29.460222 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 00:07:29.460234 zram_generator::config[1054]: No configuration found.
Nov 8 00:07:29.460246 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:07:29.460257 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:07:29.460274 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:07:29.460285 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:07:29.460299 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:07:29.460311 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:07:29.460322 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:07:29.460334 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:07:29.460346 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:07:29.460357 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:07:29.460369 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:07:29.460380 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:07:29.460393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:07:29.460410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:07:29.460422 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:07:29.460434 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:07:29.460445 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:07:29.460457 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:07:29.460470 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Nov 8 00:07:29.460481 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:07:29.460493 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:07:29.460540 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:07:29.460555 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:07:29.460568 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:07:29.460641 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:07:29.460662 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:07:29.460674 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:07:29.460686 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:07:29.460699 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:07:29.460711 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:07:29.460723 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:07:29.460734 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:07:29.460745 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:07:29.460757 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:07:29.460768 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:07:29.460780 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:07:29.460791 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:07:29.460804 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:07:29.460815 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:07:29.460825 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:07:29.460837 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:07:29.460848 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:07:29.460858 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:07:29.460869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:29.460885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:07:29.460912 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:07:29.460924 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:07:29.460934 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:07:29.460945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:07:29.460957 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:07:29.460967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:07:29.460980 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:07:29.460991 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:07:29.461003 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:07:29.461014 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:07:29.461025 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:07:29.461035 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:07:29.461046 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:07:29.461056 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:07:29.461069 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:07:29.461080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:07:29.461090 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:07:29.461101 systemd[1]: Stopped verity-setup.service.
Nov 8 00:07:29.461111 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:07:29.461124 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:07:29.461135 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:07:29.461145 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:07:29.461156 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:07:29.461167 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:07:29.461178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:07:29.461188 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:07:29.461199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:07:29.461209 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:07:29.461221 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:07:29.461232 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:07:29.461244 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:07:29.461254 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:07:29.461265 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:07:29.461306 systemd-journald[1120]: Collecting audit messages is disabled.
Nov 8 00:07:29.461333 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:07:29.461344 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:07:29.461355 systemd-journald[1120]: Journal started
Nov 8 00:07:29.461377 systemd-journald[1120]: Runtime Journal (/run/log/journal/5d095f3fd63c42c09c12d139b6cced21) is 8.0M, max 76.6M, 68.6M free.
Nov 8 00:07:29.242700 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:07:29.266027 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 8 00:07:29.266439 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:07:29.465603 kernel: fuse: init (API version 7.39)
Nov 8 00:07:29.465668 kernel: loop: module loaded
Nov 8 00:07:29.469059 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:07:29.469102 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:07:29.474151 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:07:29.474226 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:07:29.482804 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:07:29.488913 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:07:29.507643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:29.507733 kernel: ACPI: bus type drm_connector registered
Nov 8 00:07:29.507754 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:07:29.507768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:07:29.507782 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:07:29.507797 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:07:29.511043 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:07:29.516002 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:07:29.520179 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:07:29.522339 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:07:29.523614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:07:29.524485 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:07:29.524632 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:07:29.525392 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:07:29.525501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:07:29.528898 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:07:29.530098 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:07:29.533052 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:07:29.561196 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:07:29.566650 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:07:29.571460 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:07:29.587049 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:07:29.588054 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:07:29.592080 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:07:29.608819 kernel: loop0: detected capacity change from 0 to 114328
Nov 8 00:07:29.607632 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:07:29.631615 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:07:29.642379 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:07:29.647672 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:07:29.649342 systemd-journald[1120]: Time spent on flushing to /var/log/journal/5d095f3fd63c42c09c12d139b6cced21 is 31.596ms for 1133 entries.
Nov 8 00:07:29.649342 systemd-journald[1120]: System Journal (/var/log/journal/5d095f3fd63c42c09c12d139b6cced21) is 8.0M, max 584.8M, 576.8M free.
Nov 8 00:07:29.691829 systemd-journald[1120]: Received client request to flush runtime journal.
Nov 8 00:07:29.692192 kernel: loop1: detected capacity change from 0 to 114432
Nov 8 00:07:29.657117 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:07:29.674208 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Nov 8 00:07:29.674220 systemd-tmpfiles[1147]: ACLs are not supported, ignoring.
Nov 8 00:07:29.690904 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:07:29.705985 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:07:29.707998 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:07:29.739612 kernel: loop2: detected capacity change from 0 to 8
Nov 8 00:07:29.762149 kernel: loop3: detected capacity change from 0 to 207008
Nov 8 00:07:29.764981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:07:29.766330 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:07:29.776955 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:07:29.784745 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:07:29.807636 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:07:29.813767 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Nov 8 00:07:29.815942 kernel: loop4: detected capacity change from 0 to 114328
Nov 8 00:07:29.813787 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Nov 8 00:07:29.820085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:07:29.827938 kernel: loop5: detected capacity change from 0 to 114432
Nov 8 00:07:29.840604 kernel: loop6: detected capacity change from 0 to 8
Nov 8 00:07:29.844597 kernel: loop7: detected capacity change from 0 to 207008
Nov 8 00:07:29.862612 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'.
Nov 8 00:07:29.863389 (sd-merge)[1196]: Merged extensions into '/usr'.
Nov 8 00:07:29.872011 systemd[1]: Reloading requested from client PID 1146 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:07:29.872033 systemd[1]: Reloading...
Nov 8 00:07:29.978603 zram_generator::config[1223]: No configuration found.
Nov 8 00:07:30.036976 ldconfig[1142]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:07:30.126790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:07:30.173946 systemd[1]: Reloading finished in 301 ms.
Nov 8 00:07:30.197275 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:07:30.201941 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:07:30.212973 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:07:30.217121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:07:30.229564 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:07:30.229609 systemd[1]: Reloading...
Nov 8 00:07:30.253154 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:07:30.253793 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:07:30.256327 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:07:30.256708 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Nov 8 00:07:30.256758 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Nov 8 00:07:30.262614 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:07:30.262739 systemd-tmpfiles[1261]: Skipping /boot
Nov 8 00:07:30.270108 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:07:30.270200 systemd-tmpfiles[1261]: Skipping /boot
Nov 8 00:07:30.325605 zram_generator::config[1297]: No configuration found.
Nov 8 00:07:30.414675 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:07:30.461687 systemd[1]: Reloading finished in 231 ms.
Nov 8 00:07:30.485726 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:07:30.491015 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:07:30.508072 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:07:30.513910 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:07:30.520884 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:07:30.534753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:07:30.540856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:07:30.542731 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:07:30.546720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:30.551078 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:07:30.554241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:07:30.558929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:07:30.560080 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:30.578090 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 00:07:30.584080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:30.584253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:30.588437 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:07:30.594491 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:07:30.601833 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:07:30.602548 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:07:30.608726 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Nov 8 00:07:30.609058 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:07:30.617275 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:07:30.618672 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:07:30.632846 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 00:07:30.635065 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:07:30.636333 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:07:30.636904 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:07:30.638908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:07:30.639036 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:07:30.640453 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:07:30.640534 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:07:30.651835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:07:30.652950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:07:30.653134 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:07:30.656914 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:07:30.657055 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:07:30.662934 augenrules[1363]: No rules
Nov 8 00:07:30.669960 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:07:30.671655 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:07:30.672055 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:07:30.682675 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:07:30.711412 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Nov 8 00:07:30.718683 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 00:07:30.837871 systemd-networkd[1380]: lo: Link UP Nov 8 00:07:30.837882 systemd-networkd[1380]: lo: Gained carrier Nov 8 00:07:30.840941 systemd-networkd[1380]: Enumeration completed Nov 8 00:07:30.841035 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:07:30.841807 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:07:30.842934 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:07:30.846764 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:07:30.846777 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:07:30.847616 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:07:30.847627 systemd-networkd[1380]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:07:30.849740 systemd-networkd[1380]: eth0: Link UP Nov 8 00:07:30.849752 systemd-networkd[1380]: eth0: Gained carrier Nov 8 00:07:30.849766 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:07:30.866213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:07:30.867847 systemd-networkd[1380]: eth1: Link UP Nov 8 00:07:30.867858 systemd-networkd[1380]: eth1: Gained carrier Nov 8 00:07:30.867878 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:07:30.875612 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1368) Nov 8 00:07:30.890166 systemd-resolved[1337]: Positive Trust Anchors: Nov 8 00:07:30.890185 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:07:30.890217 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:07:30.895891 systemd-networkd[1380]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 8 00:07:30.897361 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Nov 8 00:07:30.897372 systemd-networkd[1380]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:07:30.898220 systemd-resolved[1337]: Using system hostname 'ci-4081-3-6-n-fb20dfd731'. Nov 8 00:07:30.900864 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:07:30.901840 systemd[1]: Reached target network.target - Network. Nov 8 00:07:30.903116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Nov 8 00:07:30.924750 systemd-networkd[1380]: eth0: DHCPv4 address 46.224.11.50/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 8 00:07:30.925332 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Nov 8 00:07:30.936194 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:07:30.982610 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:07:30.988924 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 8 00:07:30.989052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:07:30.994778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:07:30.996904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:07:30.999726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:07:31.000936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:07:31.000981 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:07:31.005285 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:07:31.009813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:07:31.010937 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:07:31.011090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:07:31.022203 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:07:31.024671 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:07:31.025683 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:07:31.029972 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:07:31.032605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:07:31.034109 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:07:31.045632 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:07:31.061617 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Nov 8 00:07:31.061711 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 8 00:07:31.061725 kernel: [drm] features: -context_init Nov 8 00:07:31.064734 kernel: [drm] number of scanouts: 1 Nov 8 00:07:31.064788 kernel: [drm] number of cap sets: 0 Nov 8 00:07:31.068867 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 8 00:07:31.069789 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:01.0 on minor 0 Nov 8 00:07:31.080794 kernel: Console: switching to colour frame buffer device 160x50 Nov 8 00:07:31.083619 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 8 00:07:31.093760 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:07:31.094262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:07:31.106916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:07:31.169180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:07:31.217732 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:07:31.224843 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:07:31.240617 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:07:31.266699 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:07:31.267821 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:07:31.268557 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:07:31.269430 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:07:31.271755 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:07:31.272622 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:07:31.273241 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:07:31.274010 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:07:31.274648 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:07:31.274684 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:07:31.275121 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:07:31.277657 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:07:31.279927 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:07:31.291480 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:07:31.293672 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:07:31.294841 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:07:31.295447 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:07:31.296077 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:07:31.296622 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:07:31.296654 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:07:31.299746 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:07:31.304612 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:07:31.308677 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:07:31.307766 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Nov 8 00:07:31.321315 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:07:31.328392 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:07:31.329752 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:07:31.331919 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:07:31.334511 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:07:31.336985 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 8 00:07:31.338632 jq[1450]: false Nov 8 00:07:31.339444 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:07:31.341777 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:07:31.346738 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:07:31.348461 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:07:31.349038 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:07:31.351143 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:07:31.354772 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:07:31.357184 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:07:31.363969 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:07:31.364137 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:07:31.401009 dbus-daemon[1449]: [system] SELinux support is enabled Nov 8 00:07:31.401198 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:07:31.403804 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:07:31.403842 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:07:31.404888 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:07:31.404904 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:07:31.416976 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:07:31.417164 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 8 00:07:31.427992 jq[1461]: true Nov 8 00:07:31.431969 extend-filesystems[1453]: Found loop4 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found loop5 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found loop6 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found loop7 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda1 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda2 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda3 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found usr Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda4 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda6 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda7 Nov 8 00:07:31.434240 extend-filesystems[1453]: Found sda9 Nov 8 00:07:31.434240 extend-filesystems[1453]: Checking size of /dev/sda9 Nov 8 00:07:31.463145 coreos-metadata[1448]: Nov 08 00:07:31.435 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 8 00:07:31.463145 coreos-metadata[1448]: Nov 08 00:07:31.440 INFO Fetch successful Nov 8 00:07:31.463145 coreos-metadata[1448]: Nov 08 00:07:31.441 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 8 00:07:31.463145 coreos-metadata[1448]: Nov 08 00:07:31.442 INFO Fetch successful Nov 8 00:07:31.463353 update_engine[1460]: I20251108 00:07:31.444847 1460 main.cc:92] Flatcar Update Engine starting Nov 8 00:07:31.463353 update_engine[1460]: I20251108 00:07:31.456256 1460 update_check_scheduler.cc:74] Next update check in 6m41s Nov 8 00:07:31.436995 (ntainerd)[1475]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:07:31.452610 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:07:31.452778 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:07:31.456618 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:07:31.461854 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:07:31.467523 extend-filesystems[1453]: Resized partition /dev/sda9 Nov 8 00:07:31.470605 tar[1468]: linux-arm64/LICENSE Nov 8 00:07:31.470605 tar[1468]: linux-arm64/helm Nov 8 00:07:31.470807 extend-filesystems[1493]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:07:31.476737 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 8 00:07:31.485004 jq[1482]: true Nov 8 00:07:31.557084 systemd-logind[1459]: New seat seat0. Nov 8 00:07:31.564130 systemd-logind[1459]: Watching system buttons on /dev/input/event0 (Power Button) Nov 8 00:07:31.564157 systemd-logind[1459]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Nov 8 00:07:31.564397 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:07:31.589036 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1373) Nov 8 00:07:31.601254 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:07:31.602234 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:07:31.614238 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:07:31.617987 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:07:31.627612 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 8 00:07:31.631949 systemd[1]: Starting sshkeys.service... 
Nov 8 00:07:31.648530 extend-filesystems[1493]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:07:31.648530 extend-filesystems[1493]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 8 00:07:31.648530 extend-filesystems[1493]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 8 00:07:31.654789 extend-filesystems[1453]: Resized filesystem in /dev/sda9 Nov 8 00:07:31.654789 extend-filesystems[1453]: Found sr0 Nov 8 00:07:31.649417 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:07:31.650658 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:07:31.675869 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:07:31.680913 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:07:31.769783 coreos-metadata[1527]: Nov 08 00:07:31.769 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 8 00:07:31.769783 coreos-metadata[1527]: Nov 08 00:07:31.769 INFO Fetch successful Nov 8 00:07:31.775337 unknown[1527]: wrote ssh authorized keys file for user: core Nov 8 00:07:31.793306 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:07:31.804635 containerd[1475]: time="2025-11-08T00:07:31.803954800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:07:31.809835 update-ssh-keys[1536]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:07:31.812649 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:07:31.817097 systemd[1]: Finished sshkeys.service. Nov 8 00:07:31.889422 containerd[1475]: time="2025-11-08T00:07:31.889310360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:07:31.894783 containerd[1475]: time="2025-11-08T00:07:31.894732160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:07:31.895452 containerd[1475]: time="2025-11-08T00:07:31.895431760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:07:31.895629 containerd[1475]: time="2025-11-08T00:07:31.895552720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:07:31.895829 containerd[1475]: time="2025-11-08T00:07:31.895808160Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:07:31.896150 containerd[1475]: time="2025-11-08T00:07:31.896130000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:07:31.896599 containerd[1475]: time="2025-11-08T00:07:31.896308880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:07:31.896599 containerd[1475]: time="2025-11-08T00:07:31.896333280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:07:31.897117 containerd[1475]: time="2025-11-08T00:07:31.897091960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:07:31.897988 containerd[1475]: time="2025-11-08T00:07:31.897611200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:07:31.897988 containerd[1475]: time="2025-11-08T00:07:31.897641520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:07:31.897988 containerd[1475]: time="2025-11-08T00:07:31.897652440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:07:31.897988 containerd[1475]: time="2025-11-08T00:07:31.897753200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:07:31.897988 containerd[1475]: time="2025-11-08T00:07:31.897953640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:07:31.898762 containerd[1475]: time="2025-11-08T00:07:31.898732560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:07:31.899051 containerd[1475]: time="2025-11-08T00:07:31.899032640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:07:31.899272 containerd[1475]: time="2025-11-08T00:07:31.899195280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:07:31.899692 containerd[1475]: time="2025-11-08T00:07:31.899670680Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:07:31.906299 containerd[1475]: time="2025-11-08T00:07:31.906226680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:07:31.906738 containerd[1475]: time="2025-11-08T00:07:31.906441360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:07:31.906738 containerd[1475]: time="2025-11-08T00:07:31.906466120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:07:31.906847 containerd[1475]: time="2025-11-08T00:07:31.906830160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:07:31.907180 containerd[1475]: time="2025-11-08T00:07:31.906949200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:07:31.907877 containerd[1475]: time="2025-11-08T00:07:31.907840800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:07:31.909134 containerd[1475]: time="2025-11-08T00:07:31.909017960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:07:31.909900 containerd[1475]: time="2025-11-08T00:07:31.909812160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:07:31.910201 containerd[1475]: time="2025-11-08T00:07:31.909838480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:07:31.910201 containerd[1475]: time="2025-11-08T00:07:31.910150160Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:07:31.910201 containerd[1475]: time="2025-11-08T00:07:31.910172560Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.910563 containerd[1475]: time="2025-11-08T00:07:31.910186400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.910563 containerd[1475]: time="2025-11-08T00:07:31.910329000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.910563 containerd[1475]: time="2025-11-08T00:07:31.910347640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.910563 containerd[1475]: time="2025-11-08T00:07:31.910362760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910375080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910708640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910816200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910843160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910866480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910879680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910894040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910906520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910919040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910934240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910950840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.910984440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.911000160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911602 containerd[1475]: time="2025-11-08T00:07:31.911013840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911870 containerd[1475]: time="2025-11-08T00:07:31.911026800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911870 containerd[1475]: time="2025-11-08T00:07:31.911040000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911870 containerd[1475]: time="2025-11-08T00:07:31.911056120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:07:31.911870 containerd[1475]: time="2025-11-08T00:07:31.911080880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911870 containerd[1475]: time="2025-11-08T00:07:31.911092960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.911870 containerd[1475]: time="2025-11-08T00:07:31.911104240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:07:31.912954 containerd[1475]: time="2025-11-08T00:07:31.912366960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:07:31.913046 containerd[1475]: time="2025-11-08T00:07:31.913028240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:07:31.913155 containerd[1475]: time="2025-11-08T00:07:31.913139400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:07:31.913607 containerd[1475]: time="2025-11-08T00:07:31.913472760Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:07:31.913607 containerd[1475]: time="2025-11-08T00:07:31.913489760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:07:31.913607 containerd[1475]: time="2025-11-08T00:07:31.913556960Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:07:31.913607 containerd[1475]: time="2025-11-08T00:07:31.913568080Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:07:31.914729 containerd[1475]: time="2025-11-08T00:07:31.913724640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:07:31.914780 containerd[1475]: time="2025-11-08T00:07:31.914080080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:07:31.914780 containerd[1475]: time="2025-11-08T00:07:31.914147520Z" level=info msg="Connect containerd service" Nov 8 00:07:31.914780 containerd[1475]: time="2025-11-08T00:07:31.914186560Z" level=info msg="using legacy CRI server" Nov 8 00:07:31.914780 containerd[1475]: time="2025-11-08T00:07:31.914193800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:07:31.915438 containerd[1475]: time="2025-11-08T00:07:31.915416120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:07:31.917415 containerd[1475]: time="2025-11-08T00:07:31.917185520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:07:31.918169 containerd[1475]: time="2025-11-08T00:07:31.918128200Z" level=info msg="Start subscribing containerd event" Nov 8 00:07:31.919798 containerd[1475]: time="2025-11-08T00:07:31.919776560Z" level=info msg="Start recovering state" Nov 8 00:07:31.919942 containerd[1475]: time="2025-11-08T00:07:31.919927640Z" level=info msg="Start event monitor" Nov 8 00:07:31.920153 containerd[1475]: time="2025-11-08T00:07:31.920136800Z" level=info msg="Start snapshots syncer" Nov 8 00:07:31.920233 containerd[1475]: time="2025-11-08T00:07:31.920219760Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:07:31.920323 containerd[1475]: time="2025-11-08T00:07:31.920309360Z" level=info msg="Start streaming server" Nov 8 00:07:31.920809 containerd[1475]: time="2025-11-08T00:07:31.919079520Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:07:31.922610 containerd[1475]: time="2025-11-08T00:07:31.921625600Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:07:31.923435 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:07:31.924660 containerd[1475]: time="2025-11-08T00:07:31.924640960Z" level=info msg="containerd successfully booted in 0.124171s" Nov 8 00:07:32.163868 tar[1468]: linux-arm64/README.md Nov 8 00:07:32.177944 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:07:32.186768 systemd-networkd[1380]: eth0: Gained IPv6LL Nov 8 00:07:32.187348 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Nov 8 00:07:32.192189 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:07:32.193771 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:07:32.203940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:07:32.208943 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:07:32.254625 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:07:32.761783 systemd-networkd[1380]: eth1: Gained IPv6LL Nov 8 00:07:32.762311 systemd-timesyncd[1358]: Network configuration changed, trying to establish connection. Nov 8 00:07:33.018815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:07:33.026318 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:07:33.501392 kubelet[1562]: E1108 00:07:33.501216 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:07:33.505822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:07:33.506063 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:07:33.700499 sshd_keygen[1485]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:07:33.724166 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:07:33.731854 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:07:33.741316 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:07:33.741621 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:07:33.750026 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:07:33.759955 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:07:33.766890 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:07:33.768839 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 8 00:07:33.770426 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:07:33.771569 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:07:33.772722 systemd[1]: Startup finished in 774ms (kernel) + 5.080s (initrd) + 5.043s (userspace) = 10.898s. Nov 8 00:07:43.616275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:07:43.621927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:07:43.728638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:07:43.741199 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:07:43.795376 kubelet[1598]: E1108 00:07:43.795291 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:07:43.798083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:07:43.798350 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:07:53.866844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:07:53.874930 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:07:53.990489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:07:54.001175 (kubelet)[1613]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:07:54.054538 kubelet[1613]: E1108 00:07:54.054455 1613 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:07:54.058773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:07:54.059038 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:08:03.478574 systemd-resolved[1337]: Clock change detected. Flushing caches. Nov 8 00:08:03.478791 systemd-timesyncd[1358]: Contacted time server 116.203.244.102:123 (2.flatcar.pool.ntp.org). Nov 8 00:08:03.478875 systemd-timesyncd[1358]: Initial clock synchronization to Sat 2025-11-08 00:08:03.478525 UTC. Nov 8 00:08:04.600266 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:08:04.607542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:04.728145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:08:04.733819 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:08:04.779005 kubelet[1628]: E1108 00:08:04.778916 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:08:04.782961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:08:04.783313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:08:10.556006 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:08:10.563625 systemd[1]: Started sshd@0-46.224.11.50:22-139.178.68.195:58190.service - OpenSSH per-connection server daemon (139.178.68.195:58190). Nov 8 00:08:11.503186 sshd[1636]: Accepted publickey for core from 139.178.68.195 port 58190 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:08:11.505726 sshd[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:11.517508 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:08:11.523450 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:08:11.527908 systemd-logind[1459]: New session 1 of user core. Nov 8 00:08:11.536577 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:08:11.544615 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:08:11.549669 (systemd)[1640]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:08:11.659418 systemd[1640]: Queued start job for default target default.target. Nov 8 00:08:11.669669 systemd[1640]: Created slice app.slice - User Application Slice. Nov 8 00:08:11.669996 systemd[1640]: Reached target paths.target - Paths. Nov 8 00:08:11.670028 systemd[1640]: Reached target timers.target - Timers. Nov 8 00:08:11.672052 systemd[1640]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:08:11.687987 systemd[1640]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:08:11.688169 systemd[1640]: Reached target sockets.target - Sockets. Nov 8 00:08:11.688191 systemd[1640]: Reached target basic.target - Basic System. Nov 8 00:08:11.688260 systemd[1640]: Reached target default.target - Main User Target. Nov 8 00:08:11.688300 systemd[1640]: Startup finished in 131ms. Nov 8 00:08:11.688590 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:08:11.695425 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:08:12.364949 systemd[1]: Started sshd@1-46.224.11.50:22-139.178.68.195:58192.service - OpenSSH per-connection server daemon (139.178.68.195:58192). Nov 8 00:08:13.292540 sshd[1651]: Accepted publickey for core from 139.178.68.195 port 58192 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:08:13.295021 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:13.300502 systemd-logind[1459]: New session 2 of user core. Nov 8 00:08:13.309476 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 8 00:08:13.939572 sshd[1651]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:13.944838 systemd[1]: sshd@1-46.224.11.50:22-139.178.68.195:58192.service: Deactivated successfully. Nov 8 00:08:13.946909 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:08:13.947930 systemd-logind[1459]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:08:13.949232 systemd-logind[1459]: Removed session 2. Nov 8 00:08:14.114649 systemd[1]: Started sshd@2-46.224.11.50:22-139.178.68.195:47920.service - OpenSSH per-connection server daemon (139.178.68.195:47920). Nov 8 00:08:14.850535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:08:14.858551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:14.971736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:14.976069 (kubelet)[1668]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:08:15.015220 kubelet[1668]: E1108 00:08:15.015126 1668 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:08:15.018044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:08:15.018355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:08:15.070700 sshd[1658]: Accepted publickey for core from 139.178.68.195 port 47920 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:08:15.073681 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:15.080491 systemd-logind[1459]: New session 3 of user core. Nov 8 00:08:15.086578 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:08:15.733755 sshd[1658]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:15.739253 systemd[1]: sshd@2-46.224.11.50:22-139.178.68.195:47920.service: Deactivated successfully. Nov 8 00:08:15.742454 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:08:15.745743 systemd-logind[1459]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:08:15.747483 systemd-logind[1459]: Removed session 3. Nov 8 00:08:15.892786 systemd[1]: Started sshd@3-46.224.11.50:22-139.178.68.195:47930.service - OpenSSH per-connection server daemon (139.178.68.195:47930). Nov 8 00:08:16.829048 sshd[1679]: Accepted publickey for core from 139.178.68.195 port 47930 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:08:16.831565 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:16.837029 systemd-logind[1459]: New session 4 of user core. Nov 8 00:08:16.851482 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:08:17.358527 update_engine[1460]: I20251108 00:08:17.358388 1460 update_attempter.cc:509] Updating boot flags... 
Nov 8 00:08:17.403184 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1692) Nov 8 00:08:17.456333 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1691) Nov 8 00:08:17.480325 sshd[1679]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:17.485725 systemd[1]: sshd@3-46.224.11.50:22-139.178.68.195:47930.service: Deactivated successfully. Nov 8 00:08:17.487867 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:08:17.489352 systemd-logind[1459]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:08:17.494824 systemd-logind[1459]: Removed session 4. Nov 8 00:08:17.517386 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1691) Nov 8 00:08:17.653698 systemd[1]: Started sshd@4-46.224.11.50:22-139.178.68.195:47944.service - OpenSSH per-connection server daemon (139.178.68.195:47944). Nov 8 00:08:18.591723 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 47944 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:08:18.593777 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:18.598200 systemd-logind[1459]: New session 5 of user core. Nov 8 00:08:18.605436 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:08:19.102538 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:08:19.102832 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:08:19.122563 sudo[1710]: pam_unix(sudo:session): session closed for user root Nov 8 00:08:19.276042 sshd[1707]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:19.282600 systemd[1]: sshd@4-46.224.11.50:22-139.178.68.195:47944.service: Deactivated successfully. Nov 8 00:08:19.285910 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:08:19.286885 systemd-logind[1459]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:08:19.287946 systemd-logind[1459]: Removed session 5. Nov 8 00:08:19.444606 systemd[1]: Started sshd@5-46.224.11.50:22-139.178.68.195:47954.service - OpenSSH per-connection server daemon (139.178.68.195:47954). Nov 8 00:08:20.373221 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 47954 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:08:20.375028 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:20.379626 systemd-logind[1459]: New session 6 of user core. Nov 8 00:08:20.387502 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:08:20.871233 sudo[1719]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:08:20.871513 sudo[1719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:08:20.876440 sudo[1719]: pam_unix(sudo:session): session closed for user root Nov 8 00:08:20.882391 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:08:20.882727 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:08:20.901685 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:08:20.903403 auditctl[1722]: No rules Nov 8 00:08:20.903794 systemd[1]: audit-rules.service: Deactivated successfully. 
Nov 8 00:08:20.903990 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:08:20.907713 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:08:20.937389 augenrules[1740]: No rules Nov 8 00:08:20.940282 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:08:20.941941 sudo[1718]: pam_unix(sudo:session): session closed for user root Nov 8 00:08:21.093592 sshd[1715]: pam_unix(sshd:session): session closed for user core Nov 8 00:08:21.099712 systemd[1]: sshd@5-46.224.11.50:22-139.178.68.195:47954.service: Deactivated successfully. Nov 8 00:08:21.102071 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:08:21.103189 systemd-logind[1459]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:08:21.104470 systemd-logind[1459]: Removed session 6. Nov 8 00:08:21.261580 systemd[1]: Started sshd@6-46.224.11.50:22-139.178.68.195:47968.service - OpenSSH per-connection server daemon (139.178.68.195:47968). Nov 8 00:08:22.188649 sshd[1748]: Accepted publickey for core from 139.178.68.195 port 47968 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:08:22.190909 sshd[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:08:22.194929 systemd-logind[1459]: New session 7 of user core. Nov 8 00:08:22.207466 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:08:22.684699 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:08:22.685368 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:08:22.968698 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:08:22.968975 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:08:23.219991 dockerd[1766]: time="2025-11-08T00:08:23.219554556Z" level=info msg="Starting up" Nov 8 00:08:23.292238 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2539838521-merged.mount: Deactivated successfully. Nov 8 00:08:23.315935 dockerd[1766]: time="2025-11-08T00:08:23.315623756Z" level=info msg="Loading containers: start." Nov 8 00:08:23.428157 kernel: Initializing XFRM netlink socket Nov 8 00:08:23.523666 systemd-networkd[1380]: docker0: Link UP Nov 8 00:08:23.540867 dockerd[1766]: time="2025-11-08T00:08:23.539906436Z" level=info msg="Loading containers: done." Nov 8 00:08:23.557745 dockerd[1766]: time="2025-11-08T00:08:23.557674516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:08:23.557853 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck441490619-merged.mount: Deactivated successfully. Nov 8 00:08:23.558111 dockerd[1766]: time="2025-11-08T00:08:23.558088956Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:08:23.558693 dockerd[1766]: time="2025-11-08T00:08:23.558672356Z" level=info msg="Daemon has completed initialization" Nov 8 00:08:23.599387 dockerd[1766]: time="2025-11-08T00:08:23.599236236Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:08:23.600270 systemd[1]: Started docker.service - Docker Application Container Engine. 
Nov 8 00:08:24.660725 containerd[1475]: time="2025-11-08T00:08:24.660645116Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:08:25.100074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:08:25.107450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:25.235048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:25.240495 (kubelet)[1912]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:08:25.296186 kubelet[1912]: E1108 00:08:25.295796 1912 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:08:25.299757 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:08:25.299899 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:08:25.313420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204035767.mount: Deactivated successfully. Nov 8 00:08:26.148779 containerd[1475]: time="2025-11-08T00:08:26.147648996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:26.148779 containerd[1475]: time="2025-11-08T00:08:26.148741116Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363783" Nov 8 00:08:26.149479 containerd[1475]: time="2025-11-08T00:08:26.149429836Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:26.152069 containerd[1475]: time="2025-11-08T00:08:26.152033196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:26.153628 containerd[1475]: time="2025-11-08T00:08:26.153586676Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.49289864s" Nov 8 00:08:26.153628 containerd[1475]: time="2025-11-08T00:08:26.153625156Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 8 00:08:26.154413 containerd[1475]: time="2025-11-08T00:08:26.154271476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:08:27.324166 containerd[1475]: time="2025-11-08T00:08:27.324082676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:27.326472 containerd[1475]: time="2025-11-08T00:08:27.326416796Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531220"
Nov 8 00:08:27.329764 containerd[1475]: time="2025-11-08T00:08:27.329551436Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:27.332612 containerd[1475]: time="2025-11-08T00:08:27.332580476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:27.333839 containerd[1475]: time="2025-11-08T00:08:27.333800396Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.17949944s" Nov 8 00:08:27.333892 containerd[1475]: time="2025-11-08T00:08:27.333840076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 8 00:08:27.334534 containerd[1475]: time="2025-11-08T00:08:27.334343076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:08:28.314145 containerd[1475]: time="2025-11-08T00:08:28.314035996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:28.317160 containerd[1475]: time="2025-11-08T00:08:28.316116476Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484344" Nov 8 00:08:28.318232 containerd[1475]: time="2025-11-08T00:08:28.318184476Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:28.322168 containerd[1475]: time="2025-11-08T00:08:28.321998596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:28.324762 containerd[1475]: time="2025-11-08T00:08:28.324356236Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 989.98032ms" Nov 8 00:08:28.324762 containerd[1475]: time="2025-11-08T00:08:28.324389996Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 8 00:08:28.325500 containerd[1475]: time="2025-11-08T00:08:28.325311236Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:08:29.260345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount167414258.mount: Deactivated successfully.
Nov 8 00:08:29.579258 containerd[1475]: time="2025-11-08T00:08:29.579116996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:29.580842 containerd[1475]: time="2025-11-08T00:08:29.580804676Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417843" Nov 8 00:08:29.581847 containerd[1475]: time="2025-11-08T00:08:29.581805636Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:29.588750 containerd[1475]: time="2025-11-08T00:08:29.588658756Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.26330396s" Nov 8 00:08:29.588834 containerd[1475]: time="2025-11-08T00:08:29.588753956Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 8 00:08:29.590121 containerd[1475]: time="2025-11-08T00:08:29.588943476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:29.590302 containerd[1475]: time="2025-11-08T00:08:29.590257836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:08:30.236192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2639067100.mount: Deactivated successfully. 
Nov 8 00:08:30.915737 containerd[1475]: time="2025-11-08T00:08:30.915648716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:30.918490 containerd[1475]: time="2025-11-08T00:08:30.918421116Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Nov 8 00:08:30.919951 containerd[1475]: time="2025-11-08T00:08:30.919872916Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:30.923898 containerd[1475]: time="2025-11-08T00:08:30.923822596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:30.925537 containerd[1475]: time="2025-11-08T00:08:30.925397916Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3350842s" Nov 8 00:08:30.925537 containerd[1475]: time="2025-11-08T00:08:30.925434316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 8 00:08:30.926110 containerd[1475]: time="2025-11-08T00:08:30.926088196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:08:31.421822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916027669.mount: Deactivated successfully. 
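The image pulls above are containerd fetching the v1.32.9 control-plane images and CoreDNS ahead of the static pods. The log does not show which client drove the pulls; as a sketch, the same fetches can be issued by hand through the CRI with crictl (the endpoint path is an assumption for this host):

```sh
# Hypothetical manual equivalent of the pulls logged above; the kubelet/CRI
# normally performs these itself. Socket path is an assumption.
export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl pull registry.k8s.io/kube-apiserver:v1.32.9
crictl pull registry.k8s.io/kube-controller-manager:v1.32.9
crictl pull registry.k8s.io/kube-scheduler:v1.32.9
crictl pull registry.k8s.io/kube-proxy:v1.32.9
crictl pull registry.k8s.io/coredns/coredns:v1.11.3
```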
Nov 8 00:08:31.430164 containerd[1475]: time="2025-11-08T00:08:31.428519956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:31.430595 containerd[1475]: time="2025-11-08T00:08:31.430568996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Nov 8 00:08:31.432098 containerd[1475]: time="2025-11-08T00:08:31.432067116Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:31.438295 containerd[1475]: time="2025-11-08T00:08:31.438238516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:31.439378 containerd[1475]: time="2025-11-08T00:08:31.439206276Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.98268ms" Nov 8 00:08:31.439378 containerd[1475]: time="2025-11-08T00:08:31.439248876Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 8 00:08:31.440646 containerd[1475]: time="2025-11-08T00:08:31.440472476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:08:32.033633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1078137917.mount: Deactivated successfully. Nov 8 00:08:33.435887 containerd[1475]: time="2025-11-08T00:08:33.435819436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:33.439170 containerd[1475]: time="2025-11-08T00:08:33.437550596Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Nov 8 00:08:33.440617 containerd[1475]: time="2025-11-08T00:08:33.440557996Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:33.446543 containerd[1475]: time="2025-11-08T00:08:33.446501276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:08:33.447318 containerd[1475]: time="2025-11-08T00:08:33.447283076Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.00678044s" Nov 8 00:08:33.447432 containerd[1475]: time="2025-11-08T00:08:33.447414196Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 8 00:08:35.351392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Nov 8 00:08:35.360500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:35.485420 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:35.489366 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:08:35.536791 kubelet[2131]: E1108 00:08:35.536745 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:08:35.539470 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:08:35.539615 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:08:39.398263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:39.406718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:39.442278 systemd[1]: Reloading requested from client PID 2145 ('systemctl') (unit session-7.scope)... Nov 8 00:08:39.442301 systemd[1]: Reloading... Nov 8 00:08:39.564160 zram_generator::config[2188]: No configuration found. Nov 8 00:08:39.657699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:08:39.726709 systemd[1]: Reloading finished in 283 ms. Nov 8 00:08:39.777155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:39.782075 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:39.783828 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:08:39.784070 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:39.788435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:08:39.898287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:08:39.910610 (kubelet)[2235]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:08:39.949662 kubelet[2235]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:08:39.949662 kubelet[2235]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:08:39.949662 kubelet[2235]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
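The two failed starts above (restart counters 5 and 6) died on the same error: /var/lib/kubelet/config.yaml did not exist yet. By this attempt the file is in place and kubelet 2235 only warns about deprecated flags. That file is normally written during node bootstrap (for example by kubeadm), not by hand; a minimal sketch, limited to fields this log actually corroborates (cgroupDriver from the nodeConfig dump below, staticPodPath from the "Adding static pod path" line):

```sh
# Sketch only: /var/lib/kubelet/config.yaml as node bootstrap would write it.
# Everything bootstrap tooling adds beyond these two fields is omitted here.
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
EOF
```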
Nov 8 00:08:39.950157 kubelet[2235]: I1108 00:08:39.949733 2235 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:08:40.727615 kubelet[2235]: I1108 00:08:40.727567 2235 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:08:40.727615 kubelet[2235]: I1108 00:08:40.727607 2235 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:08:40.727968 kubelet[2235]: I1108 00:08:40.727935 2235 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:08:40.756847 kubelet[2235]: E1108 00:08:40.756801 2235 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://46.224.11.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:40.759079 kubelet[2235]: I1108 00:08:40.758424 2235 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:08:40.771551 kubelet[2235]: E1108 00:08:40.771431 2235 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:08:40.771815 kubelet[2235]: I1108 00:08:40.771798 2235 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:08:40.774302 kubelet[2235]: I1108 00:08:40.774281 2235 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:08:40.775559 kubelet[2235]: I1108 00:08:40.775515 2235 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:08:40.776318 kubelet[2235]: I1108 00:08:40.775665 2235 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-fb20dfd731","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:08:40.776318 kubelet[2235]: I1108 00:08:40.775918 2235 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:08:40.776318 kubelet[2235]: I1108 00:08:40.775928 2235 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:08:40.776318 kubelet[2235]: I1108 00:08:40.776122 2235 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:08:40.779608 kubelet[2235]: I1108 00:08:40.779589 2235 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:08:40.779754 kubelet[2235]: I1108 00:08:40.779741 2235 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:08:40.779826 kubelet[2235]: I1108 00:08:40.779817 2235 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:08:40.779885 kubelet[2235]: I1108 00:08:40.779876 2235 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:08:40.785391 kubelet[2235]: W1108 00:08:40.785336 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.224.11.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fb20dfd731&limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:40.785463 kubelet[2235]: E1108 00:08:40.785405 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.224.11.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fb20dfd731&limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:40.786751 kubelet[2235]: 
W1108 00:08:40.786702 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.224.11.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:40.787410 kubelet[2235]: E1108 00:08:40.786760 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.224.11.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:40.787410 kubelet[2235]: I1108 00:08:40.786869 2235 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:08:40.787595 kubelet[2235]: I1108 00:08:40.787571 2235 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:08:40.789148 kubelet[2235]: W1108 00:08:40.787747 2235 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:08:40.789229 kubelet[2235]: I1108 00:08:40.789201 2235 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:08:40.789255 kubelet[2235]: I1108 00:08:40.789236 2235 server.go:1287] "Started kubelet" Nov 8 00:08:40.791516 kubelet[2235]: I1108 00:08:40.791485 2235 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:08:40.792437 kubelet[2235]: I1108 00:08:40.792420 2235 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:08:40.793349 kubelet[2235]: I1108 00:08:40.793287 2235 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:08:40.793623 kubelet[2235]: I1108 00:08:40.793583 2235 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:08:40.794056 kubelet[2235]: E1108 00:08:40.793798 2235 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.11.50:6443/api/v1/namespaces/default/events\": dial tcp 46.224.11.50:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-6-n-fb20dfd731.1875df7488c4c63b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-6-n-fb20dfd731,UID:ci-4081-3-6-n-fb20dfd731,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-fb20dfd731,},FirstTimestamp:2025-11-08 00:08:40.789214779 +0000 UTC m=+0.873025352,LastTimestamp:2025-11-08 00:08:40.789214779 +0000 UTC m=+0.873025352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-fb20dfd731,}" Nov 8 00:08:40.795849 kubelet[2235]: I1108 00:08:40.795828 2235 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:08:40.796222 kubelet[2235]: I1108 00:08:40.796193 2235 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:08:40.800210 kubelet[2235]: E1108 00:08:40.800173 2235 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" Nov 8 00:08:40.800286 
kubelet[2235]: I1108 00:08:40.800224 2235 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:08:40.800459 kubelet[2235]: I1108 00:08:40.800432 2235 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:08:40.800504 kubelet[2235]: I1108 00:08:40.800495 2235 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:08:40.801260 kubelet[2235]: W1108 00:08:40.801211 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.224.11.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:40.801335 kubelet[2235]: E1108 00:08:40.801267 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.224.11.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:40.801463 kubelet[2235]: I1108 00:08:40.801442 2235 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:08:40.801541 kubelet[2235]: I1108 00:08:40.801525 2235 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:08:40.802588 kubelet[2235]: E1108 00:08:40.802448 2235 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:08:40.803515 kubelet[2235]: E1108 00:08:40.803488 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.11.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fb20dfd731?timeout=10s\": dial tcp 46.224.11.50:6443: connect: connection refused" interval="200ms" Nov 8 00:08:40.803739 kubelet[2235]: I1108 00:08:40.803720 2235 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:08:40.817035 kubelet[2235]: I1108 00:08:40.816875 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:08:40.818874 kubelet[2235]: I1108 00:08:40.818848 2235 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:08:40.819350 kubelet[2235]: I1108 00:08:40.818994 2235 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:08:40.819350 kubelet[2235]: I1108 00:08:40.819030 2235 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
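The HardEvictionThresholds array in the nodeConfig dump above is hard to read as flattened JSON; the same values, rewritten as the equivalent KubeletConfiguration stanza (they match the kubelet's stock hard-eviction defaults), look like this:

```sh
# Restatement of the thresholds logged in the nodeConfig dump; nothing here
# implies this stanza appears verbatim in the host's config.yaml.
cat <<'EOF'
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF
```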
Nov 8 00:08:40.819350 kubelet[2235]: I1108 00:08:40.819040 2235 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:08:40.819350 kubelet[2235]: E1108 00:08:40.819091 2235 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:08:40.830227 kubelet[2235]: W1108 00:08:40.829469 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.224.11.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:40.830227 kubelet[2235]: E1108 00:08:40.829540 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.224.11.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:40.831276 kubelet[2235]: I1108 00:08:40.831256 2235 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:08:40.831394 kubelet[2235]: I1108 00:08:40.831380 2235 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:08:40.831461 kubelet[2235]: I1108 00:08:40.831453 2235 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:08:40.836608 kubelet[2235]: I1108 00:08:40.836569 2235 policy_none.go:49] "None policy: Start" Nov 8 00:08:40.837382 kubelet[2235]: I1108 00:08:40.836841 2235 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:08:40.837382 kubelet[2235]: I1108 00:08:40.836901 2235 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:08:40.844210 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:08:40.865227 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:08:40.868862 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:08:40.878734 kubelet[2235]: I1108 00:08:40.878589 2235 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:08:40.878895 kubelet[2235]: I1108 00:08:40.878884 2235 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:08:40.878959 kubelet[2235]: I1108 00:08:40.878901 2235 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:08:40.880288 kubelet[2235]: I1108 00:08:40.879669 2235 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:08:40.881183 kubelet[2235]: E1108 00:08:40.881089 2235 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:08:40.881342 kubelet[2235]: E1108 00:08:40.881327 2235 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-6-n-fb20dfd731\" not found" Nov 8 00:08:40.932763 systemd[1]: Created slice kubepods-burstable-podf61aadfc756ae34e9a5ec4d8082bca36.slice - libcontainer container kubepods-burstable-podf61aadfc756ae34e9a5ec4d8082bca36.slice. 
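kubepods.slice and its burstable/besteffort children, created just above, form the kubelet's QoS cgroup hierarchy under the systemd cgroup driver; the per-pod slice that follows nests in the burstable tier. A sketch of how the resulting tree could be inspected on this host (unit and path names taken from the log, cgroup v2 per CgroupVersion:2 in the nodeConfig dump):

```sh
# Inspect the QoS cgroup hierarchy the kubelet just created.
systemd-cgls /sys/fs/cgroup/kubepods.slice
systemctl status kubepods-burstable.slice
```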
Nov 8 00:08:40.941980 kubelet[2235]: E1108 00:08:40.941561 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:40.946963 systemd[1]: Created slice kubepods-burstable-pod36c200186146399d5c161eedacb58c27.slice - libcontainer container kubepods-burstable-pod36c200186146399d5c161eedacb58c27.slice. Nov 8 00:08:40.957324 kubelet[2235]: E1108 00:08:40.957276 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:40.960327 systemd[1]: Created slice kubepods-burstable-podcb3de5340479675e9debb939b657700e.slice - libcontainer container kubepods-burstable-podcb3de5340479675e9debb939b657700e.slice. Nov 8 00:08:40.962715 kubelet[2235]: E1108 00:08:40.962605 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:40.982367 kubelet[2235]: I1108 00:08:40.982235 2235 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:40.985137 kubelet[2235]: E1108 00:08:40.985067 2235 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.11.50:6443/api/v1/nodes\": dial tcp 46.224.11.50:6443: connect: connection refused" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.005119 kubelet[2235]: E1108 00:08:41.005041 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.11.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fb20dfd731?timeout=10s\": dial tcp 46.224.11.50:6443: connect: connection refused" interval="400ms" Nov 8 00:08:41.101512 kubelet[2235]: I1108 00:08:41.101448 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f61aadfc756ae34e9a5ec4d8082bca36-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" (UID: \"f61aadfc756ae34e9a5ec4d8082bca36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.101692 kubelet[2235]: I1108 00:08:41.101566 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f61aadfc756ae34e9a5ec4d8082bca36-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" (UID: \"f61aadfc756ae34e9a5ec4d8082bca36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.101692 kubelet[2235]: I1108 00:08:41.101680 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.101802 kubelet[2235]: I1108 00:08:41.101756 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.101848 kubelet[2235]: I1108 00:08:41.101806 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.101894 kubelet[2235]: I1108 00:08:41.101846 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f61aadfc756ae34e9a5ec4d8082bca36-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" (UID: \"f61aadfc756ae34e9a5ec4d8082bca36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.101947 kubelet[2235]: I1108 00:08:41.101931 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.101990 kubelet[2235]: I1108 00:08:41.101967 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.102049 kubelet[2235]: I1108 00:08:41.102005 2235 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb3de5340479675e9debb939b657700e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-fb20dfd731\" (UID: \"cb3de5340479675e9debb939b657700e\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.189093 kubelet[2235]: I1108 00:08:41.188531 2235 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.189311 kubelet[2235]: E1108 00:08:41.189264 2235 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.11.50:6443/api/v1/nodes\": dial tcp 46.224.11.50:6443: connect: connection refused" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.244358 containerd[1475]: time="2025-11-08T00:08:41.244114297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-fb20dfd731,Uid:f61aadfc756ae34e9a5ec4d8082bca36,Namespace:kube-system,Attempt:0,}" Nov 8 00:08:41.259209 containerd[1475]: time="2025-11-08T00:08:41.259116995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-fb20dfd731,Uid:36c200186146399d5c161eedacb58c27,Namespace:kube-system,Attempt:0,}" Nov 8 00:08:41.264606 containerd[1475]: time="2025-11-08T00:08:41.264327776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-fb20dfd731,Uid:cb3de5340479675e9debb939b657700e,Namespace:kube-system,Attempt:0,}" Nov 8 00:08:41.406885 kubelet[2235]: E1108 00:08:41.406778 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://46.224.11.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fb20dfd731?timeout=10s\": dial tcp 46.224.11.50:6443: connect: connection refused" interval="800ms" Nov 8 00:08:41.592298 kubelet[2235]: I1108 00:08:41.591696 2235 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.592298 kubelet[2235]: E1108 00:08:41.592171 2235 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.11.50:6443/api/v1/nodes\": dial tcp 46.224.11.50:6443: connect: connection refused" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:41.710151 kubelet[2235]: W1108 00:08:41.710029 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://46.224.11.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:41.710312 kubelet[2235]: E1108 00:08:41.710156 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://46.224.11.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:41.724166 kubelet[2235]: W1108 00:08:41.724055 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://46.224.11.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:41.724316 kubelet[2235]: E1108 00:08:41.724194 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://46.224.11.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:41.785701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1253040522.mount: Deactivated successfully. 
Nov 8 00:08:41.794677 containerd[1475]: time="2025-11-08T00:08:41.794604173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:41.796239 containerd[1475]: time="2025-11-08T00:08:41.796161059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269193" Nov 8 00:08:41.796748 containerd[1475]: time="2025-11-08T00:08:41.796708781Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:41.797748 containerd[1475]: time="2025-11-08T00:08:41.797710305Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:41.798635 containerd[1475]: time="2025-11-08T00:08:41.798600429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:08:41.799681 containerd[1475]: time="2025-11-08T00:08:41.799626513Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:08:41.800213 containerd[1475]: time="2025-11-08T00:08:41.800164755Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:41.804862 containerd[1475]: time="2025-11-08T00:08:41.804803693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:08:41.805905 containerd[1475]: time="2025-11-08T00:08:41.805704096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.490839ms" Nov 8 00:08:41.807952 containerd[1475]: time="2025-11-08T00:08:41.807902745Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 548.639349ms" Nov 8 00:08:41.815780 containerd[1475]: time="2025-11-08T00:08:41.815736776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 551.32788ms" Nov 8 00:08:41.893162 containerd[1475]: time="2025-11-08T00:08:41.891336712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:41.893162 containerd[1475]: time="2025-11-08T00:08:41.891409072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:41.893162 containerd[1475]: time="2025-11-08T00:08:41.891425032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:41.893162 containerd[1475]: time="2025-11-08T00:08:41.891531953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:41.901326 containerd[1475]: time="2025-11-08T00:08:41.900675189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:41.901454 containerd[1475]: time="2025-11-08T00:08:41.901340671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:41.901454 containerd[1475]: time="2025-11-08T00:08:41.901355391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:41.901509 containerd[1475]: time="2025-11-08T00:08:41.901480312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:41.907329 containerd[1475]: time="2025-11-08T00:08:41.907246854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:08:41.907657 containerd[1475]: time="2025-11-08T00:08:41.907510895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:08:41.907657 containerd[1475]: time="2025-11-08T00:08:41.907546335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:41.907890 containerd[1475]: time="2025-11-08T00:08:41.907834697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:08:41.925302 systemd[1]: Started cri-containerd-627b30d378eec43d9039cf0adc16269762b0d192ec0f1b71ee84647fe7d67582.scope - libcontainer container 627b30d378eec43d9039cf0adc16269762b0d192ec0f1b71ee84647fe7d67582. Nov 8 00:08:41.932416 systemd[1]: Started cri-containerd-fa1c2ce5f64a3e77d0dbd9f7fe95d34fd276f437ca085ab36126948751facfda.scope - libcontainer container fa1c2ce5f64a3e77d0dbd9f7fe95d34fd276f437ca085ab36126948751facfda. Nov 8 00:08:41.945281 systemd[1]: Started cri-containerd-5d2004c983537cda36c267e714d895f05c791f6ec4bc5a57032c2c59b6bf576e.scope - libcontainer container 5d2004c983537cda36c267e714d895f05c791f6ec4bc5a57032c2c59b6bf576e. 
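The three sandboxes starting here are the control-plane static pods picked up from the kubelet's manifest directory; containerd wraps each in a transient cri-containerd-<id>.scope unit named after the sandbox ID. The manifests live at the staticPodPath logged earlier (sketch; the exact file list is an assumption):

```sh
# Static pod manifests the kubelet is instantiating here; path is the
# staticPodPath from the "Adding static pod path" log line above.
ls /etc/kubernetes/manifests
# e.g. kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
```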
Nov 8 00:08:41.988433 containerd[1475]: time="2025-11-08T00:08:41.987460648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-6-n-fb20dfd731,Uid:cb3de5340479675e9debb939b657700e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa1c2ce5f64a3e77d0dbd9f7fe95d34fd276f437ca085ab36126948751facfda\"" Nov 8 00:08:41.994035 containerd[1475]: time="2025-11-08T00:08:41.993988154Z" level=info msg="CreateContainer within sandbox \"fa1c2ce5f64a3e77d0dbd9f7fe95d34fd276f437ca085ab36126948751facfda\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:08:41.999978 containerd[1475]: time="2025-11-08T00:08:41.999946057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-6-n-fb20dfd731,Uid:36c200186146399d5c161eedacb58c27,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d2004c983537cda36c267e714d895f05c791f6ec4bc5a57032c2c59b6bf576e\"" Nov 8 00:08:42.002967 containerd[1475]: time="2025-11-08T00:08:42.002699548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-6-n-fb20dfd731,Uid:f61aadfc756ae34e9a5ec4d8082bca36,Namespace:kube-system,Attempt:0,} returns sandbox id \"627b30d378eec43d9039cf0adc16269762b0d192ec0f1b71ee84647fe7d67582\"" Nov 8 00:08:42.004486 containerd[1475]: time="2025-11-08T00:08:42.004455154Z" level=info msg="CreateContainer within sandbox \"5d2004c983537cda36c267e714d895f05c791f6ec4bc5a57032c2c59b6bf576e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:08:42.006726 containerd[1475]: time="2025-11-08T00:08:42.006695162Z" level=info msg="CreateContainer within sandbox \"627b30d378eec43d9039cf0adc16269762b0d192ec0f1b71ee84647fe7d67582\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:08:42.020065 containerd[1475]: time="2025-11-08T00:08:42.019811251Z" level=info msg="CreateContainer within sandbox \"fa1c2ce5f64a3e77d0dbd9f7fe95d34fd276f437ca085ab36126948751facfda\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d\"" Nov 8 00:08:42.020864 containerd[1475]: time="2025-11-08T00:08:42.020837694Z" level=info msg="StartContainer for \"cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d\"" Nov 8 00:08:42.028742 containerd[1475]: time="2025-11-08T00:08:42.028405642Z" level=info msg="CreateContainer within sandbox \"627b30d378eec43d9039cf0adc16269762b0d192ec0f1b71ee84647fe7d67582\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0e2d42f1a9fb8b6e2b89bc61616fb0b2ef5664fcb9ef0aa77b92a7e4ea5d8384\"" Nov 8 00:08:42.029839 containerd[1475]: time="2025-11-08T00:08:42.029667087Z" level=info msg="StartContainer for \"0e2d42f1a9fb8b6e2b89bc61616fb0b2ef5664fcb9ef0aa77b92a7e4ea5d8384\"" Nov 8 00:08:42.032515 containerd[1475]: time="2025-11-08T00:08:42.032446817Z" level=info msg="CreateContainer within sandbox \"5d2004c983537cda36c267e714d895f05c791f6ec4bc5a57032c2c59b6bf576e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742\"" Nov 8 00:08:42.033190 containerd[1475]: time="2025-11-08T00:08:42.033147540Z" level=info msg="StartContainer for \"0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742\"" Nov 8 00:08:42.057337 systemd[1]: Started cri-containerd-cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d.scope - libcontainer container 
cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d. Nov 8 00:08:42.078512 systemd[1]: Started cri-containerd-0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742.scope - libcontainer container 0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742. Nov 8 00:08:42.081853 systemd[1]: Started cri-containerd-0e2d42f1a9fb8b6e2b89bc61616fb0b2ef5664fcb9ef0aa77b92a7e4ea5d8384.scope - libcontainer container 0e2d42f1a9fb8b6e2b89bc61616fb0b2ef5664fcb9ef0aa77b92a7e4ea5d8384. Nov 8 00:08:42.128378 containerd[1475]: time="2025-11-08T00:08:42.127315605Z" level=info msg="StartContainer for \"cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d\" returns successfully" Nov 8 00:08:42.156189 containerd[1475]: time="2025-11-08T00:08:42.153665542Z" level=info msg="StartContainer for \"0e2d42f1a9fb8b6e2b89bc61616fb0b2ef5664fcb9ef0aa77b92a7e4ea5d8384\" returns successfully" Nov 8 00:08:42.160555 containerd[1475]: time="2025-11-08T00:08:42.160518927Z" level=info msg="StartContainer for \"0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742\" returns successfully" Nov 8 00:08:42.207762 kubelet[2235]: E1108 00:08:42.207699 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.11.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-6-n-fb20dfd731?timeout=10s\": dial tcp 46.224.11.50:6443: connect: connection refused" interval="1.6s" Nov 8 00:08:42.210543 kubelet[2235]: W1108 00:08:42.210458 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://46.224.11.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fb20dfd731&limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:42.210543 kubelet[2235]: E1108 00:08:42.210518 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://46.224.11.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-6-n-fb20dfd731&limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:42.271340 kubelet[2235]: W1108 00:08:42.271233 2235 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://46.224.11.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 46.224.11.50:6443: connect: connection refused Nov 8 00:08:42.271340 kubelet[2235]: E1108 00:08:42.271308 2235 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://46.224.11.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.11.50:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:08:42.394171 kubelet[2235]: I1108 00:08:42.393775 2235 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:42.845246 kubelet[2235]: E1108 00:08:42.844971 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:42.847061 kubelet[2235]: E1108 00:08:42.846919 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 
00:08:42.851361 kubelet[2235]: E1108 00:08:42.851343 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:43.855173 kubelet[2235]: E1108 00:08:43.854169 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:43.855173 kubelet[2235]: E1108 00:08:43.854469 2235 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-6-n-fb20dfd731\" not found" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.647145 kubelet[2235]: I1108 00:08:44.646296 2235 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.647145 kubelet[2235]: E1108 00:08:44.646340 2235 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-6-n-fb20dfd731\": node \"ci-4081-3-6-n-fb20dfd731\" not found" Nov 8 00:08:44.703022 kubelet[2235]: I1108 00:08:44.703000 2235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.755078 kubelet[2235]: E1108 00:08:44.755033 2235 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Nov 8 00:08:44.769235 kubelet[2235]: E1108 00:08:44.769190 2235 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.769540 kubelet[2235]: I1108 00:08:44.769380 2235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.779924 kubelet[2235]: E1108 00:08:44.779328 2235 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.780375 kubelet[2235]: I1108 00:08:44.780157 2235 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.784336 kubelet[2235]: E1108 00:08:44.784309 2235 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-fb20dfd731\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731" Nov 8 00:08:44.788155 kubelet[2235]: I1108 00:08:44.787429 2235 apiserver.go:52] "Watching apiserver" Nov 8 00:08:44.801295 kubelet[2235]: I1108 00:08:44.801257 2235 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:08:46.855966 systemd[1]: Reloading requested from client PID 2512 ('systemctl') (unit session-7.scope)... Nov 8 00:08:46.856305 systemd[1]: Reloading... Nov 8 00:08:46.966161 zram_generator::config[2555]: No configuration found. Nov 8 00:08:47.066007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:08:47.153919 systemd[1]: Reloading finished in 297 ms. 
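The repeated "no PriorityClass with name system-node-critical" failures above are transient: mirror-pod creation races the API server's bootstrapping of its built-in priority classes, and the node registers successfully a few entries later. Once the control plane settles, the classes can be confirmed with (sketch):

```sh
# Built-in priority classes the mirror-pod writes were waiting on.
kubectl get priorityclass system-node-critical system-cluster-critical
```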
Nov 8 00:08:47.196275 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:08:47.211591 systemd[1]: kubelet.service: Deactivated successfully.
Nov 8 00:08:47.211918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:08:47.211986 systemd[1]: kubelet.service: Consumed 1.263s CPU time, 127.7M memory peak, 0B memory swap peak.
Nov 8 00:08:47.222520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:08:47.353412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:08:47.353621 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 00:08:47.436599 kubelet[2597]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:08:47.436599 kubelet[2597]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 00:08:47.436599 kubelet[2597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 00:08:47.437015 kubelet[2597]: I1108 00:08:47.436640 2597 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 00:08:47.447109 kubelet[2597]: I1108 00:08:47.447036 2597 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 00:08:47.447109 kubelet[2597]: I1108 00:08:47.447100 2597 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 00:08:47.447760 kubelet[2597]: I1108 00:08:47.447711 2597 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 00:08:47.449947 kubelet[2597]: I1108 00:08:47.449921 2597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 8 00:08:47.454162 kubelet[2597]: I1108 00:08:47.453301 2597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 00:08:47.458175 kubelet[2597]: E1108 00:08:47.457749 2597 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 00:08:47.458175 kubelet[2597]: I1108 00:08:47.457784 2597 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 00:08:47.460273 kubelet[2597]: I1108 00:08:47.460237 2597 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 00:08:47.460563 kubelet[2597]: I1108 00:08:47.460531 2597 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 00:08:47.460821 kubelet[2597]: I1108 00:08:47.460661 2597 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-6-n-fb20dfd731","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 00:08:47.460959 kubelet[2597]: I1108 00:08:47.460946 2597 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 00:08:47.461040 kubelet[2597]: I1108 00:08:47.461032 2597 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 00:08:47.461145 kubelet[2597]: I1108 00:08:47.461125 2597 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:08:47.461446 kubelet[2597]: I1108 00:08:47.461431 2597 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 00:08:47.461525 kubelet[2597]: I1108 00:08:47.461514 2597 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 00:08:47.461654 kubelet[2597]: I1108 00:08:47.461642 2597 kubelet.go:352] "Adding apiserver pod source"
Nov 8 00:08:47.461709 kubelet[2597]: I1108 00:08:47.461701 2597 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 00:08:47.466151 kubelet[2597]: I1108 00:08:47.465516 2597 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 00:08:47.466242 kubelet[2597]: I1108 00:08:47.466198 2597 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 00:08:47.468234 kubelet[2597]: I1108 00:08:47.466675 2597 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 00:08:47.468234 kubelet[2597]: I1108 00:08:47.466741 2597 server.go:1287] "Started kubelet"
Nov 8 00:08:47.471967 kubelet[2597]: I1108 00:08:47.469732 2597 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 00:08:47.481951 kubelet[2597]: I1108 00:08:47.481896 2597 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 00:08:47.484163 kubelet[2597]: I1108 00:08:47.482819 2597 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 00:08:47.484163 kubelet[2597]: I1108 00:08:47.484034 2597 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 00:08:47.484289 kubelet[2597]: I1108 00:08:47.484278 2597 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 00:08:47.484521 kubelet[2597]: I1108 00:08:47.484492 2597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 00:08:47.488161 kubelet[2597]: I1108 00:08:47.486049 2597 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 00:08:47.488161 kubelet[2597]: E1108 00:08:47.486239 2597 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-6-n-fb20dfd731\" not found"
Nov 8 00:08:47.488161 kubelet[2597]: I1108 00:08:47.487779 2597 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 00:08:47.488161 kubelet[2597]: I1108 00:08:47.487889 2597 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 00:08:47.491155 kubelet[2597]: I1108 00:08:47.489397 2597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 00:08:47.491155 kubelet[2597]: I1108 00:08:47.490302 2597 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 00:08:47.491155 kubelet[2597]: I1108 00:08:47.490319 2597 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 00:08:47.491155 kubelet[2597]: I1108 00:08:47.490339 2597 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 00:08:47.491155 kubelet[2597]: I1108 00:08:47.490346 2597 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 8 00:08:47.491155 kubelet[2597]: E1108 00:08:47.490381 2597 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 8 00:08:47.500465 kubelet[2597]: I1108 00:08:47.500439 2597 factory.go:221] Registration of the systemd container factory successfully
Nov 8 00:08:47.501310 kubelet[2597]: I1108 00:08:47.501287 2597 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 00:08:47.504329 kubelet[2597]: I1108 00:08:47.504309 2597 factory.go:221] Registration of the containerd container factory successfully
Nov 8 00:08:47.565083 kubelet[2597]: I1108 00:08:47.565053 2597 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 8 00:08:47.565083 kubelet[2597]: I1108 00:08:47.565076 2597 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 8 00:08:47.565249 kubelet[2597]: I1108 00:08:47.565096 2597 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 00:08:47.565306 kubelet[2597]: I1108 00:08:47.565289 2597 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 8 00:08:47.565332 kubelet[2597]: I1108 00:08:47.565307 2597 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 8 00:08:47.565332 kubelet[2597]: I1108 00:08:47.565326 2597 policy_none.go:49] "None policy: Start"
Nov 8 00:08:47.565382 kubelet[2597]: I1108 00:08:47.565334 2597 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 8 00:08:47.565382 kubelet[2597]: I1108 00:08:47.565344 2597 state_mem.go:35] "Initializing new in-memory state store"
Nov 8 00:08:47.565445 kubelet[2597]: I1108 00:08:47.565434 2597 state_mem.go:75] "Updated machine memory state"
Nov 8 00:08:47.569954 kubelet[2597]: I1108 00:08:47.569920 2597 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 8 00:08:47.570142 kubelet[2597]: I1108 00:08:47.570082 2597 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 8 00:08:47.570142 kubelet[2597]: I1108 00:08:47.570099 2597 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 8 00:08:47.571084 kubelet[2597]: I1108 00:08:47.570524 2597 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 8 00:08:47.574556 kubelet[2597]: E1108 00:08:47.574467 2597 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 8 00:08:47.593185 kubelet[2597]: I1108 00:08:47.590871 2597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.593185 kubelet[2597]: I1108 00:08:47.591316 2597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.593185 kubelet[2597]: I1108 00:08:47.591591 2597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.675069 kubelet[2597]: I1108 00:08:47.674950 2597 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690286 kubelet[2597]: I1108 00:08:47.689231 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690286 kubelet[2597]: I1108 00:08:47.689299 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690286 kubelet[2597]: I1108 00:08:47.689411 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f61aadfc756ae34e9a5ec4d8082bca36-ca-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" (UID: \"f61aadfc756ae34e9a5ec4d8082bca36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690286 kubelet[2597]: I1108 00:08:47.689444 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f61aadfc756ae34e9a5ec4d8082bca36-k8s-certs\") pod \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" (UID: \"f61aadfc756ae34e9a5ec4d8082bca36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690286 kubelet[2597]: I1108 00:08:47.689474 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690585 kubelet[2597]: I1108 00:08:47.689506 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb3de5340479675e9debb939b657700e-kubeconfig\") pod \"kube-scheduler-ci-4081-3-6-n-fb20dfd731\" (UID: \"cb3de5340479675e9debb939b657700e\") " pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690585 kubelet[2597]: I1108 00:08:47.689536 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f61aadfc756ae34e9a5ec4d8082bca36-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" (UID: \"f61aadfc756ae34e9a5ec4d8082bca36\") " pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690585 kubelet[2597]: I1108 00:08:47.689563 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-ca-certs\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690585 kubelet[2597]: I1108 00:08:47.689600 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36c200186146399d5c161eedacb58c27-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-6-n-fb20dfd731\" (UID: \"36c200186146399d5c161eedacb58c27\") " pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690585 kubelet[2597]: I1108 00:08:47.689245 2597 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:47.690585 kubelet[2597]: I1108 00:08:47.689745 2597 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:48.463451 kubelet[2597]: I1108 00:08:48.463407 2597 apiserver.go:52] "Watching apiserver"
Nov 8 00:08:48.488181 kubelet[2597]: I1108 00:08:48.488073 2597 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 8 00:08:48.543347 kubelet[2597]: I1108 00:08:48.543301 2597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:48.544351 kubelet[2597]: I1108 00:08:48.544269 2597 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:48.557179 kubelet[2597]: E1108 00:08:48.556305 2597 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-6-n-fb20dfd731\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:48.559224 kubelet[2597]: E1108 00:08:48.559194 2597 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-6-n-fb20dfd731\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731"
Nov 8 00:08:48.597070 kubelet[2597]: I1108 00:08:48.596831 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-6-n-fb20dfd731" podStartSLOduration=1.5968106930000001 podStartE2EDuration="1.596810693s" podCreationTimestamp="2025-11-08 00:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:48.578580567 +0000 UTC m=+1.217659103" watchObservedRunningTime="2025-11-08 00:08:48.596810693 +0000 UTC m=+1.235889229"
Nov 8 00:08:48.611200 kubelet[2597]: I1108 00:08:48.610183 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-6-n-fb20dfd731" podStartSLOduration=1.6101604859999998 podStartE2EDuration="1.610160486s" podCreationTimestamp="2025-11-08 00:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:48.597209974 +0000 UTC m=+1.236288470" watchObservedRunningTime="2025-11-08 00:08:48.610160486 +0000 UTC m=+1.249239022"
Nov 8 00:08:52.496567 kubelet[2597]: I1108 00:08:52.495933 2597 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 8 00:08:52.499992 containerd[1475]: time="2025-11-08T00:08:52.497342440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 8 00:08:52.500314 kubelet[2597]: I1108 00:08:52.499206 2597 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 8 00:08:53.456540 kubelet[2597]: I1108 00:08:53.455206 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-6-n-fb20dfd731" podStartSLOduration=6.45517979 podStartE2EDuration="6.45517979s" podCreationTimestamp="2025-11-08 00:08:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:48.611464649 +0000 UTC m=+1.250543185" watchObservedRunningTime="2025-11-08 00:08:53.45517979 +0000 UTC m=+6.094258326"
Nov 8 00:08:53.467690 systemd[1]: Created slice kubepods-besteffort-pod468c0dca_a3d6_4fcb_a4c4_91e3d417817f.slice - libcontainer container kubepods-besteffort-pod468c0dca_a3d6_4fcb_a4c4_91e3d417817f.slice.
Nov 8 00:08:53.527690 kubelet[2597]: I1108 00:08:53.527560 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxc8j\" (UniqueName: \"kubernetes.io/projected/468c0dca-a3d6-4fcb-a4c4-91e3d417817f-kube-api-access-mxc8j\") pod \"kube-proxy-v4xtb\" (UID: \"468c0dca-a3d6-4fcb-a4c4-91e3d417817f\") " pod="kube-system/kube-proxy-v4xtb"
Nov 8 00:08:53.527690 kubelet[2597]: I1108 00:08:53.527660 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/468c0dca-a3d6-4fcb-a4c4-91e3d417817f-kube-proxy\") pod \"kube-proxy-v4xtb\" (UID: \"468c0dca-a3d6-4fcb-a4c4-91e3d417817f\") " pod="kube-system/kube-proxy-v4xtb"
Nov 8 00:08:53.527690 kubelet[2597]: I1108 00:08:53.527694 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/468c0dca-a3d6-4fcb-a4c4-91e3d417817f-xtables-lock\") pod \"kube-proxy-v4xtb\" (UID: \"468c0dca-a3d6-4fcb-a4c4-91e3d417817f\") " pod="kube-system/kube-proxy-v4xtb"
Nov 8 00:08:53.532640 kubelet[2597]: I1108 00:08:53.527728 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468c0dca-a3d6-4fcb-a4c4-91e3d417817f-lib-modules\") pod \"kube-proxy-v4xtb\" (UID: \"468c0dca-a3d6-4fcb-a4c4-91e3d417817f\") " pod="kube-system/kube-proxy-v4xtb"
Nov 8 00:08:53.542471 systemd[1]: Started sshd@7-46.224.11.50:22-147.139.164.196:6102.service - OpenSSH per-connection server daemon (147.139.164.196:6102).
Nov 8 00:08:53.624649 kubelet[2597]: W1108 00:08:53.624573 2597 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081-3-6-n-fb20dfd731" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-6-n-fb20dfd731' and this object
Nov 8 00:08:53.624835 kubelet[2597]: E1108 00:08:53.624801 2597 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081-3-6-n-fb20dfd731\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-6-n-fb20dfd731' and this object" logger="UnhandledError"
Nov 8 00:08:53.625144 kubelet[2597]: W1108 00:08:53.625044 2597 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081-3-6-n-fb20dfd731" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081-3-6-n-fb20dfd731' and this object
Nov 8 00:08:53.625144 kubelet[2597]: E1108 00:08:53.625066 2597 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081-3-6-n-fb20dfd731\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081-3-6-n-fb20dfd731' and this object" logger="UnhandledError"
Nov 8 00:08:53.630884 kubelet[2597]: I1108 00:08:53.629231 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7059a30-c32c-4c57-b1fb-ba8dd5b79fda-var-lib-calico\") pod \"tigera-operator-7dcd859c48-75h5x\" (UID: \"d7059a30-c32c-4c57-b1fb-ba8dd5b79fda\") " pod="tigera-operator/tigera-operator-7dcd859c48-75h5x"
Nov 8 00:08:53.630884 kubelet[2597]: I1108 00:08:53.629333 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlphc\" (UniqueName: \"kubernetes.io/projected/d7059a30-c32c-4c57-b1fb-ba8dd5b79fda-kube-api-access-rlphc\") pod \"tigera-operator-7dcd859c48-75h5x\" (UID: \"d7059a30-c32c-4c57-b1fb-ba8dd5b79fda\") " pod="tigera-operator/tigera-operator-7dcd859c48-75h5x"
Nov 8 00:08:53.633367 systemd[1]: Created slice kubepods-besteffort-podd7059a30_c32c_4c57_b1fb_ba8dd5b79fda.slice - libcontainer container kubepods-besteffort-podd7059a30_c32c_4c57_b1fb_ba8dd5b79fda.slice.
Nov 8 00:08:53.777925 containerd[1475]: time="2025-11-08T00:08:53.777227531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4xtb,Uid:468c0dca-a3d6-4fcb-a4c4-91e3d417817f,Namespace:kube-system,Attempt:0,}"
Nov 8 00:08:53.802804 containerd[1475]: time="2025-11-08T00:08:53.802372217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:08:53.802804 containerd[1475]: time="2025-11-08T00:08:53.802467537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:08:53.802804 containerd[1475]: time="2025-11-08T00:08:53.802501337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:08:53.802804 containerd[1475]: time="2025-11-08T00:08:53.802683497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:08:53.819700 systemd[1]: run-containerd-runc-k8s.io-ee90667021b3f5a83fc5084954a69dd93f7e2f9837c252135915a586b3c9c5cc-runc.ZA1OQF.mount: Deactivated successfully.
Nov 8 00:08:53.828324 systemd[1]: Started cri-containerd-ee90667021b3f5a83fc5084954a69dd93f7e2f9837c252135915a586b3c9c5cc.scope - libcontainer container ee90667021b3f5a83fc5084954a69dd93f7e2f9837c252135915a586b3c9c5cc.
Nov 8 00:08:53.854206 containerd[1475]: time="2025-11-08T00:08:53.854169150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v4xtb,Uid:468c0dca-a3d6-4fcb-a4c4-91e3d417817f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee90667021b3f5a83fc5084954a69dd93f7e2f9837c252135915a586b3c9c5cc\""
Nov 8 00:08:53.858215 containerd[1475]: time="2025-11-08T00:08:53.858181078Z" level=info msg="CreateContainer within sandbox \"ee90667021b3f5a83fc5084954a69dd93f7e2f9837c252135915a586b3c9c5cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 8 00:08:53.871238 containerd[1475]: time="2025-11-08T00:08:53.871196741Z" level=info msg="CreateContainer within sandbox \"ee90667021b3f5a83fc5084954a69dd93f7e2f9837c252135915a586b3c9c5cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60e27dc75ed265634758bc2a366f638ad69f8237dd98a2b913536ff5464f7479\""
Nov 8 00:08:53.873090 containerd[1475]: time="2025-11-08T00:08:53.872252583Z" level=info msg="StartContainer for \"60e27dc75ed265634758bc2a366f638ad69f8237dd98a2b913536ff5464f7479\""
Nov 8 00:08:53.903455 systemd[1]: Started cri-containerd-60e27dc75ed265634758bc2a366f638ad69f8237dd98a2b913536ff5464f7479.scope - libcontainer container 60e27dc75ed265634758bc2a366f638ad69f8237dd98a2b913536ff5464f7479.
Nov 8 00:08:53.933839 containerd[1475]: time="2025-11-08T00:08:53.933797094Z" level=info msg="StartContainer for \"60e27dc75ed265634758bc2a366f638ad69f8237dd98a2b913536ff5464f7479\" returns successfully"
Nov 8 00:08:54.573875 kubelet[2597]: I1108 00:08:54.573702 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v4xtb" podStartSLOduration=1.573682025 podStartE2EDuration="1.573682025s" podCreationTimestamp="2025-11-08 00:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:08:54.573388744 +0000 UTC m=+7.212467280" watchObservedRunningTime="2025-11-08 00:08:54.573682025 +0000 UTC m=+7.212760561"
Nov 8 00:08:54.839586 containerd[1475]: time="2025-11-08T00:08:54.839344635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-75h5x,Uid:d7059a30-c32c-4c57-b1fb-ba8dd5b79fda,Namespace:tigera-operator,Attempt:0,}"
Nov 8 00:08:54.869853 containerd[1475]: time="2025-11-08T00:08:54.869742326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:08:54.869995 containerd[1475]: time="2025-11-08T00:08:54.869880806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:08:54.870041 containerd[1475]: time="2025-11-08T00:08:54.870002246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:08:54.870186 containerd[1475]: time="2025-11-08T00:08:54.870153287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:08:54.895803 systemd[1]: Started cri-containerd-47c665592cccd3359b48b02e625ec64baa35376b3699f91bda3200094bc25e0c.scope - libcontainer container 47c665592cccd3359b48b02e625ec64baa35376b3699f91bda3200094bc25e0c.
Nov 8 00:08:54.929690 containerd[1475]: time="2025-11-08T00:08:54.929640427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-75h5x,Uid:d7059a30-c32c-4c57-b1fb-ba8dd5b79fda,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"47c665592cccd3359b48b02e625ec64baa35376b3699f91bda3200094bc25e0c\""
Nov 8 00:08:54.932547 containerd[1475]: time="2025-11-08T00:08:54.932495072Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 8 00:08:56.463282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4248751149.mount: Deactivated successfully.
Nov 8 00:08:56.833066 containerd[1475]: time="2025-11-08T00:08:56.832945773Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:08:56.834374 containerd[1475]: time="2025-11-08T00:08:56.834248615Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Nov 8 00:08:56.835275 containerd[1475]: time="2025-11-08T00:08:56.835220416Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:08:56.838782 containerd[1475]: time="2025-11-08T00:08:56.838726302Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:08:56.840064 containerd[1475]: time="2025-11-08T00:08:56.839940983Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.907393991s"
Nov 8 00:08:56.840064 containerd[1475]: time="2025-11-08T00:08:56.839974143Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Nov 8 00:08:56.843074 containerd[1475]: time="2025-11-08T00:08:56.843039588Z" level=info msg="CreateContainer within sandbox \"47c665592cccd3359b48b02e625ec64baa35376b3699f91bda3200094bc25e0c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 8 00:08:56.858399 containerd[1475]: time="2025-11-08T00:08:56.858271491Z" level=info msg="CreateContainer within sandbox \"47c665592cccd3359b48b02e625ec64baa35376b3699f91bda3200094bc25e0c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364\""
Nov 8 00:08:56.859696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296533941.mount: Deactivated successfully.
Nov 8 00:08:56.861024 containerd[1475]: time="2025-11-08T00:08:56.860850775Z" level=info msg="StartContainer for \"c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364\""
Nov 8 00:08:56.892369 systemd[1]: Started cri-containerd-c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364.scope - libcontainer container c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364.
Nov 8 00:08:56.920245 containerd[1475]: time="2025-11-08T00:08:56.920091863Z" level=info msg="StartContainer for \"c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364\" returns successfully"
Nov 8 00:08:57.599577 kubelet[2597]: I1108 00:08:57.599422 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-75h5x" podStartSLOduration=2.688957622 podStartE2EDuration="4.599394258s" podCreationTimestamp="2025-11-08 00:08:53 +0000 UTC" firstStartedPulling="2025-11-08 00:08:54.93136999 +0000 UTC m=+7.570448526" lastFinishedPulling="2025-11-08 00:08:56.841806666 +0000 UTC m=+9.480885162" observedRunningTime="2025-11-08 00:08:57.598203816 +0000 UTC m=+10.237282392" watchObservedRunningTime="2025-11-08 00:08:57.599394258 +0000 UTC m=+10.238472834"
Nov 8 00:08:58.561217 sshd[2640]: kex_protocol_error: type 20 seq 2 [preauth]
Nov 8 00:08:58.561217 sshd[2640]: kex_protocol_error: type 30 seq 3 [preauth]
Nov 8 00:08:59.554384 sshd[2640]: kex_protocol_error: type 20 seq 4 [preauth]
Nov 8 00:08:59.554384 sshd[2640]: kex_protocol_error: type 30 seq 5 [preauth]
Nov 8 00:09:01.517982 sshd[2640]: kex_protocol_error: type 20 seq 6 [preauth]
Nov 8 00:09:01.517982 sshd[2640]: kex_protocol_error: type 30 seq 7 [preauth]
Nov 8 00:09:03.090378 sudo[1751]: pam_unix(sudo:session): session closed for user root
Nov 8 00:09:03.243009 sshd[1748]: pam_unix(sshd:session): session closed for user core
Nov 8 00:09:03.249702 systemd-logind[1459]: Session 7 logged out. Waiting for processes to exit.
Nov 8 00:09:03.250502 systemd[1]: session-7.scope: Deactivated successfully.
Nov 8 00:09:03.250839 systemd[1]: session-7.scope: Consumed 7.641s CPU time, 150.6M memory peak, 0B memory swap peak.
Nov 8 00:09:03.251240 systemd[1]: sshd@6-46.224.11.50:22-139.178.68.195:47968.service: Deactivated successfully.
Nov 8 00:09:03.259039 systemd-logind[1459]: Removed session 7.
Nov 8 00:09:15.749478 systemd[1]: Created slice kubepods-besteffort-pod553fe920_0492_4546_8b7b_c46340544565.slice - libcontainer container kubepods-besteffort-pod553fe920_0492_4546_8b7b_c46340544565.slice.
Nov 8 00:09:15.777195 kubelet[2597]: I1108 00:09:15.777048 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553fe920-0492-4546-8b7b-c46340544565-tigera-ca-bundle\") pod \"calico-typha-f45b64555-cndgq\" (UID: \"553fe920-0492-4546-8b7b-c46340544565\") " pod="calico-system/calico-typha-f45b64555-cndgq"
Nov 8 00:09:15.777195 kubelet[2597]: I1108 00:09:15.777100 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/553fe920-0492-4546-8b7b-c46340544565-typha-certs\") pod \"calico-typha-f45b64555-cndgq\" (UID: \"553fe920-0492-4546-8b7b-c46340544565\") " pod="calico-system/calico-typha-f45b64555-cndgq"
Nov 8 00:09:15.777195 kubelet[2597]: I1108 00:09:15.777117 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7894f\" (UniqueName: \"kubernetes.io/projected/553fe920-0492-4546-8b7b-c46340544565-kube-api-access-7894f\") pod \"calico-typha-f45b64555-cndgq\" (UID: \"553fe920-0492-4546-8b7b-c46340544565\") " pod="calico-system/calico-typha-f45b64555-cndgq"
Nov 8 00:09:15.971710 systemd[1]: Created slice kubepods-besteffort-pod745418d3_aae4_43ee_8f4d_bf110223198a.slice - libcontainer container kubepods-besteffort-pod745418d3_aae4_43ee_8f4d_bf110223198a.slice.
Nov 8 00:09:15.978815 kubelet[2597]: I1108 00:09:15.978767 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/745418d3-aae4-43ee-8f4d-bf110223198a-node-certs\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.978815 kubelet[2597]: I1108 00:09:15.978812 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-var-run-calico\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.978982 kubelet[2597]: I1108 00:09:15.978831 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-cni-net-dir\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.978982 kubelet[2597]: I1108 00:09:15.978847 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-xtables-lock\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.978982 kubelet[2597]: I1108 00:09:15.978864 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/745418d3-aae4-43ee-8f4d-bf110223198a-tigera-ca-bundle\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.978982 kubelet[2597]: I1108 00:09:15.978878 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-var-lib-calico\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.978982 kubelet[2597]: I1108 00:09:15.978893 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-cni-bin-dir\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.979100 kubelet[2597]: I1108 00:09:15.978906 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-policysync\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.979100 kubelet[2597]: I1108 00:09:15.978921 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-flexvol-driver-host\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.979100 kubelet[2597]: I1108 00:09:15.978943 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-lib-modules\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.979100 kubelet[2597]: I1108 00:09:15.979053 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/745418d3-aae4-43ee-8f4d-bf110223198a-cni-log-dir\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:15.979100 kubelet[2597]: I1108 00:09:15.979073 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kst24\" (UniqueName: \"kubernetes.io/projected/745418d3-aae4-43ee-8f4d-bf110223198a-kube-api-access-kst24\") pod \"calico-node-8k2vx\" (UID: \"745418d3-aae4-43ee-8f4d-bf110223198a\") " pod="calico-system/calico-node-8k2vx"
Nov 8 00:09:16.053963 containerd[1475]: time="2025-11-08T00:09:16.052752135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f45b64555-cndgq,Uid:553fe920-0492-4546-8b7b-c46340544565,Namespace:calico-system,Attempt:0,}"
Nov 8 00:09:16.087204 kubelet[2597]: E1108 00:09:16.086289 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.087204 kubelet[2597]: W1108 00:09:16.086317 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.087204 kubelet[2597]: E1108 00:09:16.086339 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.088398 kubelet[2597]: E1108 00:09:16.088365 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.088650 kubelet[2597]: W1108 00:09:16.088574 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.088650 kubelet[2597]: E1108 00:09:16.088603 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.093501 containerd[1475]: time="2025-11-08T00:09:16.093051511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:09:16.093501 containerd[1475]: time="2025-11-08T00:09:16.093107111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:09:16.097073 containerd[1475]: time="2025-11-08T00:09:16.093912271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:09:16.097073 containerd[1475]: time="2025-11-08T00:09:16.094073872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:09:16.111151 kubelet[2597]: E1108 00:09:16.110880 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.111151 kubelet[2597]: W1108 00:09:16.111023 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.111151 kubelet[2597]: E1108 00:09:16.111049 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.133570 systemd[1]: Started cri-containerd-90f85953cc112ee5934c4f52246daa10bed25c801a015cac4aee81cf9dce840e.scope - libcontainer container 90f85953cc112ee5934c4f52246daa10bed25c801a015cac4aee81cf9dce840e.
Nov 8 00:09:16.183171 kubelet[2597]: E1108 00:09:16.182620 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b"
Nov 8 00:09:16.223350 containerd[1475]: time="2025-11-08T00:09:16.223292884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f45b64555-cndgq,Uid:553fe920-0492-4546-8b7b-c46340544565,Namespace:calico-system,Attempt:0,} returns sandbox id \"90f85953cc112ee5934c4f52246daa10bed25c801a015cac4aee81cf9dce840e\""
Nov 8 00:09:16.226522 containerd[1475]: time="2025-11-08T00:09:16.226165566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Nov 8 00:09:16.267273 kubelet[2597]: E1108 00:09:16.267236 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.267273 kubelet[2597]: W1108 00:09:16.267263 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.267566 kubelet[2597]: E1108 00:09:16.267286 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.267664 kubelet[2597]: E1108 00:09:16.267647 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.267711 kubelet[2597]: W1108 00:09:16.267661 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.267737 kubelet[2597]: E1108 00:09:16.267714 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.268100 kubelet[2597]: E1108 00:09:16.268081 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.268100 kubelet[2597]: W1108 00:09:16.268094 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.268232 kubelet[2597]: E1108 00:09:16.268116 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.268406 kubelet[2597]: E1108 00:09:16.268388 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.268406 kubelet[2597]: W1108 00:09:16.268402 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.268530 kubelet[2597]: E1108 00:09:16.268413 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.269282 kubelet[2597]: E1108 00:09:16.269253 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.269389 kubelet[2597]: W1108 00:09:16.269367 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.269422 kubelet[2597]: E1108 00:09:16.269391 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.269749 kubelet[2597]: E1108 00:09:16.269729 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.269749 kubelet[2597]: W1108 00:09:16.269744 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.269868 kubelet[2597]: E1108 00:09:16.269761 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.270151 kubelet[2597]: E1108 00:09:16.270116 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.270256 kubelet[2597]: W1108 00:09:16.270228 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.270256 kubelet[2597]: E1108 00:09:16.270252 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.270660 kubelet[2597]: E1108 00:09:16.270638 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.270660 kubelet[2597]: W1108 00:09:16.270655 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.270752 kubelet[2597]: E1108 00:09:16.270667 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.271306 kubelet[2597]: E1108 00:09:16.271117 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.271373 kubelet[2597]: W1108 00:09:16.271303 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.271373 kubelet[2597]: E1108 00:09:16.271335 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.271898 kubelet[2597]: E1108 00:09:16.271878 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.271898 kubelet[2597]: W1108 00:09:16.271892 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.272665 kubelet[2597]: E1108 00:09:16.271903 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.272856 kubelet[2597]: E1108 00:09:16.272821 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.272912 kubelet[2597]: W1108 00:09:16.272838 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.272948 kubelet[2597]: E1108 00:09:16.272911 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.273346 kubelet[2597]: E1108 00:09:16.273326 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.273346 kubelet[2597]: W1108 00:09:16.273342 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.273435 kubelet[2597]: E1108 00:09:16.273354 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.274421 kubelet[2597]: E1108 00:09:16.274399 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.274421 kubelet[2597]: W1108 00:09:16.274415 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.274621 kubelet[2597]: E1108 00:09:16.274427 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.274710 kubelet[2597]: E1108 00:09:16.274694 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.274710 kubelet[2597]: W1108 00:09:16.274707 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.274782 kubelet[2597]: E1108 00:09:16.274720 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.275415 kubelet[2597]: E1108 00:09:16.275397 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.275415 kubelet[2597]: W1108 00:09:16.275412 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.275534 kubelet[2597]: E1108 00:09:16.275446 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.275716 kubelet[2597]: E1108 00:09:16.275700 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.275716 kubelet[2597]: W1108 00:09:16.275714 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.275813 kubelet[2597]: E1108 00:09:16.275725 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.276006 kubelet[2597]: E1108 00:09:16.275992 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.276006 kubelet[2597]: W1108 00:09:16.276005 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.276068 kubelet[2597]: E1108 00:09:16.276016 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.277052 kubelet[2597]: E1108 00:09:16.277029 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.277052 kubelet[2597]: W1108 00:09:16.277046 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.278363 kubelet[2597]: E1108 00:09:16.277062 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.278477 kubelet[2597]: E1108 00:09:16.278458 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.278477 kubelet[2597]: W1108 00:09:16.278475 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.278582 kubelet[2597]: E1108 00:09:16.278528 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.279040 kubelet[2597]: E1108 00:09:16.279017 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.279040 kubelet[2597]: W1108 00:09:16.279032 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.279040 kubelet[2597]: E1108 00:09:16.279044 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.280570 containerd[1475]: time="2025-11-08T00:09:16.280527708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8k2vx,Uid:745418d3-aae4-43ee-8f4d-bf110223198a,Namespace:calico-system,Attempt:0,}"
Nov 8 00:09:16.281965 kubelet[2597]: E1108 00:09:16.281940 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.282083 kubelet[2597]: W1108 00:09:16.281962 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.282120 kubelet[2597]: E1108 00:09:16.282086 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.282167 kubelet[2597]: I1108 00:09:16.282115 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8da5e4ab-3d6f-46a3-91d8-e794f2481a0b-varrun\") pod \"csi-node-driver-l4z57\" (UID: \"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b\") " pod="calico-system/csi-node-driver-l4z57"
Nov 8 00:09:16.282701 kubelet[2597]: E1108 00:09:16.282562 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.282701 kubelet[2597]: W1108 00:09:16.282581 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.282701 kubelet[2597]: E1108 00:09:16.282609 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.283275 kubelet[2597]: E1108 00:09:16.283051 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.283275 kubelet[2597]: W1108 00:09:16.283085 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.283275 kubelet[2597]: E1108 00:09:16.283097 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.283806 kubelet[2597]: E1108 00:09:16.283562 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.283806 kubelet[2597]: W1108 00:09:16.283585 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.283806 kubelet[2597]: E1108 00:09:16.283597 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.283806 kubelet[2597]: I1108 00:09:16.283618 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8da5e4ab-3d6f-46a3-91d8-e794f2481a0b-registration-dir\") pod \"csi-node-driver-l4z57\" (UID: \"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b\") " pod="calico-system/csi-node-driver-l4z57"
Nov 8 00:09:16.285472 kubelet[2597]: E1108 00:09:16.285276 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.285472 kubelet[2597]: W1108 00:09:16.285300 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.285472 kubelet[2597]: E1108 00:09:16.285334 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.285472 kubelet[2597]: I1108 00:09:16.285360 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8da5e4ab-3d6f-46a3-91d8-e794f2481a0b-socket-dir\") pod \"csi-node-driver-l4z57\" (UID: \"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b\") " pod="calico-system/csi-node-driver-l4z57"
Nov 8 00:09:16.286122 kubelet[2597]: E1108 00:09:16.285855 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.286122 kubelet[2597]: W1108 00:09:16.285878 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.286122 kubelet[2597]: E1108 00:09:16.286018 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.286122 kubelet[2597]: E1108 00:09:16.286050 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.286122 kubelet[2597]: I1108 00:09:16.286052 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfc2h\" (UniqueName: \"kubernetes.io/projected/8da5e4ab-3d6f-46a3-91d8-e794f2481a0b-kube-api-access-gfc2h\") pod \"csi-node-driver-l4z57\" (UID: \"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b\") " pod="calico-system/csi-node-driver-l4z57"
Nov 8 00:09:16.286122 kubelet[2597]: W1108 00:09:16.286058 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.286122 kubelet[2597]: E1108 00:09:16.286082 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.286668 kubelet[2597]: E1108 00:09:16.286454 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.286668 kubelet[2597]: W1108 00:09:16.286468 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.286668 kubelet[2597]: E1108 00:09:16.286509 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.286875 kubelet[2597]: E1108 00:09:16.286774 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.286875 kubelet[2597]: W1108 00:09:16.286786 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.287024 kubelet[2597]: E1108 00:09:16.286923 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 00:09:16.287024 kubelet[2597]: I1108 00:09:16.286947 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8da5e4ab-3d6f-46a3-91d8-e794f2481a0b-kubelet-dir\") pod \"csi-node-driver-l4z57\" (UID: \"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b\") " pod="calico-system/csi-node-driver-l4z57"
Nov 8 00:09:16.287390 kubelet[2597]: E1108 00:09:16.287295 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 00:09:16.287390 kubelet[2597]: W1108 00:09:16.287313 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 00:09:16.287390 kubelet[2597]: E1108 00:09:16.287326 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 8 00:09:16.287776 kubelet[2597]: E1108 00:09:16.287673 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.287776 kubelet[2597]: W1108 00:09:16.287687 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.287776 kubelet[2597]: E1108 00:09:16.287700 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.288418 kubelet[2597]: E1108 00:09:16.288245 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.288418 kubelet[2597]: W1108 00:09:16.288261 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.288418 kubelet[2597]: E1108 00:09:16.288398 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.289935 kubelet[2597]: E1108 00:09:16.289906 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.289935 kubelet[2597]: W1108 00:09:16.289925 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.290114 kubelet[2597]: E1108 00:09:16.289940 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.290768 kubelet[2597]: E1108 00:09:16.290729 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.290768 kubelet[2597]: W1108 00:09:16.290755 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.290768 kubelet[2597]: E1108 00:09:16.290772 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.291787 kubelet[2597]: E1108 00:09:16.291765 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.291787 kubelet[2597]: W1108 00:09:16.291783 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.291898 kubelet[2597]: E1108 00:09:16.291798 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:16.326317 containerd[1475]: time="2025-11-08T00:09:16.321949085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:16.326317 containerd[1475]: time="2025-11-08T00:09:16.322037725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:16.326317 containerd[1475]: time="2025-11-08T00:09:16.322063925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:16.326317 containerd[1475]: time="2025-11-08T00:09:16.322241645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:16.348337 systemd[1]: Started cri-containerd-1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92.scope - libcontainer container 1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92. Nov 8 00:09:16.389013 kubelet[2597]: E1108 00:09:16.388975 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.389013 kubelet[2597]: W1108 00:09:16.388999 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.389013 kubelet[2597]: E1108 00:09:16.389021 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.389841 kubelet[2597]: E1108 00:09:16.389816 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.389841 kubelet[2597]: W1108 00:09:16.389836 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.389930 kubelet[2597]: E1108 00:09:16.389886 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.390189 kubelet[2597]: E1108 00:09:16.390168 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.390189 kubelet[2597]: W1108 00:09:16.390185 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.390284 kubelet[2597]: E1108 00:09:16.390200 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:16.390918 kubelet[2597]: E1108 00:09:16.390508 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.390918 kubelet[2597]: W1108 00:09:16.390525 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.390918 kubelet[2597]: E1108 00:09:16.390613 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.393081 kubelet[2597]: E1108 00:09:16.392666 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.393081 kubelet[2597]: W1108 00:09:16.392683 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.393081 kubelet[2597]: E1108 00:09:16.392696 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.393337 kubelet[2597]: E1108 00:09:16.393124 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.393337 kubelet[2597]: W1108 00:09:16.393278 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.393337 kubelet[2597]: E1108 00:09:16.393292 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.395459 kubelet[2597]: E1108 00:09:16.393735 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.395459 kubelet[2597]: W1108 00:09:16.393753 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.395459 kubelet[2597]: E1108 00:09:16.393792 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.395459 kubelet[2597]: E1108 00:09:16.394000 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.395459 kubelet[2597]: W1108 00:09:16.394008 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.395459 kubelet[2597]: E1108 00:09:16.394122 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:16.395459 kubelet[2597]: E1108 00:09:16.394639 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.395459 kubelet[2597]: W1108 00:09:16.394651 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.395459 kubelet[2597]: E1108 00:09:16.394663 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.396571 kubelet[2597]: E1108 00:09:16.396543 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.396571 kubelet[2597]: W1108 00:09:16.396565 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.398725 kubelet[2597]: E1108 00:09:16.398692 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.399305 kubelet[2597]: E1108 00:09:16.398636 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.399958 kubelet[2597]: W1108 00:09:16.399408 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.399958 kubelet[2597]: E1108 00:09:16.399671 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.402276 kubelet[2597]: E1108 00:09:16.402069 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.402276 kubelet[2597]: W1108 00:09:16.402252 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.402649 kubelet[2597]: E1108 00:09:16.402606 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:16.402954 kubelet[2597]: E1108 00:09:16.402933 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.402954 kubelet[2597]: W1108 00:09:16.402953 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.404063 containerd[1475]: time="2025-11-08T00:09:16.403859358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8k2vx,Uid:745418d3-aae4-43ee-8f4d-bf110223198a,Namespace:calico-system,Attempt:0,} returns sandbox id \"1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92\"" Nov 8 00:09:16.405634 kubelet[2597]: E1108 00:09:16.404916 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.405780 kubelet[2597]: E1108 00:09:16.405749 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.405780 kubelet[2597]: W1108 00:09:16.405777 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.406294 kubelet[2597]: E1108 00:09:16.406256 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.406584 kubelet[2597]: E1108 00:09:16.406562 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.406633 kubelet[2597]: W1108 00:09:16.406589 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.406893 kubelet[2597]: E1108 00:09:16.406669 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.407048 kubelet[2597]: E1108 00:09:16.406948 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.407090 kubelet[2597]: W1108 00:09:16.406964 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.407684 kubelet[2597]: E1108 00:09:16.407657 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.407908 kubelet[2597]: W1108 00:09:16.407677 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.407908 kubelet[2597]: E1108 00:09:16.407799 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:16.409496 kubelet[2597]: E1108 00:09:16.409407 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.409496 kubelet[2597]: W1108 00:09:16.409426 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.409825 kubelet[2597]: E1108 00:09:16.409509 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.410190 kubelet[2597]: E1108 00:09:16.410120 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.410190 kubelet[2597]: W1108 00:09:16.410189 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.410280 kubelet[2597]: E1108 00:09:16.410205 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.410315 kubelet[2597]: E1108 00:09:16.410304 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.411527 kubelet[2597]: E1108 00:09:16.411200 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.411527 kubelet[2597]: W1108 00:09:16.411220 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.411527 kubelet[2597]: E1108 00:09:16.411234 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.411900 kubelet[2597]: E1108 00:09:16.411882 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.411900 kubelet[2597]: W1108 00:09:16.411898 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.412028 kubelet[2597]: E1108 00:09:16.412012 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:16.412792 kubelet[2597]: E1108 00:09:16.412771 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.412792 kubelet[2597]: W1108 00:09:16.412792 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.412928 kubelet[2597]: E1108 00:09:16.412887 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.413442 kubelet[2597]: E1108 00:09:16.413421 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.413442 kubelet[2597]: W1108 00:09:16.413441 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.413560 kubelet[2597]: E1108 00:09:16.413455 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.414263 kubelet[2597]: E1108 00:09:16.414242 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.414319 kubelet[2597]: W1108 00:09:16.414272 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.414319 kubelet[2597]: E1108 00:09:16.414289 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.414967 kubelet[2597]: E1108 00:09:16.414940 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.414967 kubelet[2597]: W1108 00:09:16.414964 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.415092 kubelet[2597]: E1108 00:09:16.414979 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:16.424991 kubelet[2597]: E1108 00:09:16.424951 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:16.424991 kubelet[2597]: W1108 00:09:16.424975 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:16.425120 kubelet[2597]: E1108 00:09:16.424997 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:17.492871 kubelet[2597]: E1108 00:09:17.491230 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:17.679555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038970639.mount: Deactivated successfully. Nov 8 00:09:18.140602 containerd[1475]: time="2025-11-08T00:09:18.140552877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:18.142654 containerd[1475]: time="2025-11-08T00:09:18.141290077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 8 00:09:18.142654 containerd[1475]: time="2025-11-08T00:09:18.142544157Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:18.144994 containerd[1475]: time="2025-11-08T00:09:18.144958198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:18.146485 containerd[1475]: time="2025-11-08T00:09:18.146426919Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.920217193s" Nov 8 00:09:18.146485 containerd[1475]: time="2025-11-08T00:09:18.146466919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 8 00:09:18.148403 containerd[1475]: time="2025-11-08T00:09:18.148370879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:09:18.164695 containerd[1475]: time="2025-11-08T00:09:18.164508645Z" level=info msg="CreateContainer within sandbox \"90f85953cc112ee5934c4f52246daa10bed25c801a015cac4aee81cf9dce840e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:09:18.188062 containerd[1475]: time="2025-11-08T00:09:18.187932974Z" level=info msg="CreateContainer within sandbox \"90f85953cc112ee5934c4f52246daa10bed25c801a015cac4aee81cf9dce840e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7ea64131ea0fd801dd31d0a37d2bec7b40de68aaa891232a86c6e74954cf2540\"" Nov 8 00:09:18.191000 containerd[1475]: time="2025-11-08T00:09:18.190968535Z" level=info msg="StartContainer for \"7ea64131ea0fd801dd31d0a37d2bec7b40de68aaa891232a86c6e74954cf2540\"" Nov 8 00:09:18.221377 systemd[1]: Started cri-containerd-7ea64131ea0fd801dd31d0a37d2bec7b40de68aaa891232a86c6e74954cf2540.scope - libcontainer container 7ea64131ea0fd801dd31d0a37d2bec7b40de68aaa891232a86c6e74954cf2540. 
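The typha record above reports the pull as taking 1.920217193s, measured inside containerd; kubelet's pod_startup_latency_tracker record further down in this log reports firstStartedPulling and lastFinishedPulling timestamps for the same pod. A minimal Go sketch (illustrative only; the two timestamps are copied verbatim from that later record) shows the two figures agree to within a couple of milliseconds:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching time.Time's default String() output, which is
        // the format kubelet prints in the pod_startup_latency_tracker record.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        start, err := time.Parse(layout, "2025-11-08 00:09:16.225874085 +0000 UTC")
        if err != nil {
            panic(err)
        }
        end, err := time.Parse(layout, "2025-11-08 00:09:18.147951479 +0000 UTC")
        if err != nil {
            panic(err)
        }
        // Prints 1.922077394s, a whisker above containerd's own
        // 1.920217193s for the typha pull; kubelet's window also covers
        // its own bookkeeping around the pull.
        fmt.Println(end.Sub(start))
    }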
Nov 8 00:09:18.259906 containerd[1475]: time="2025-11-08T00:09:18.259670799Z" level=info msg="StartContainer for \"7ea64131ea0fd801dd31d0a37d2bec7b40de68aaa891232a86c6e74954cf2540\" returns successfully" Nov 8 00:09:18.698638 kubelet[2597]: E1108 00:09:18.698098 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.699328 kubelet[2597]: W1108 00:09:18.699032 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.699328 kubelet[2597]: E1108 00:09:18.699067 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.700277 kubelet[2597]: E1108 00:09:18.700237 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.700754 kubelet[2597]: W1108 00:09:18.700387 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.700754 kubelet[2597]: E1108 00:09:18.700514 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.701107 kubelet[2597]: E1108 00:09:18.700905 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.701286 kubelet[2597]: W1108 00:09:18.701172 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.701286 kubelet[2597]: E1108 00:09:18.701195 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.701682 kubelet[2597]: E1108 00:09:18.701570 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.701682 kubelet[2597]: W1108 00:09:18.701593 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.701682 kubelet[2597]: E1108 00:09:18.701605 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.702248 kubelet[2597]: E1108 00:09:18.702111 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.702248 kubelet[2597]: W1108 00:09:18.702124 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.702248 kubelet[2597]: E1108 00:09:18.702197 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:18.702661 kubelet[2597]: E1108 00:09:18.702566 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.702661 kubelet[2597]: W1108 00:09:18.702579 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.702661 kubelet[2597]: E1108 00:09:18.702590 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.702943 kubelet[2597]: E1108 00:09:18.702804 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.702943 kubelet[2597]: W1108 00:09:18.702813 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.702943 kubelet[2597]: E1108 00:09:18.702824 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.703138 kubelet[2597]: E1108 00:09:18.703115 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.703329 kubelet[2597]: W1108 00:09:18.703163 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.703329 kubelet[2597]: E1108 00:09:18.703177 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.703689 kubelet[2597]: E1108 00:09:18.703580 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.703689 kubelet[2597]: W1108 00:09:18.703598 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.703689 kubelet[2597]: E1108 00:09:18.703610 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.703868 kubelet[2597]: E1108 00:09:18.703855 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.703930 kubelet[2597]: W1108 00:09:18.703918 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.703985 kubelet[2597]: E1108 00:09:18.703975 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:18.704355 kubelet[2597]: E1108 00:09:18.704255 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.704355 kubelet[2597]: W1108 00:09:18.704269 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.704355 kubelet[2597]: E1108 00:09:18.704281 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.707220 kubelet[2597]: E1108 00:09:18.707194 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.707516 kubelet[2597]: W1108 00:09:18.707347 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.707516 kubelet[2597]: E1108 00:09:18.707375 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.707790 kubelet[2597]: E1108 00:09:18.707669 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.707790 kubelet[2597]: W1108 00:09:18.707682 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.707790 kubelet[2597]: E1108 00:09:18.707693 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.707959 kubelet[2597]: E1108 00:09:18.707947 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.708063 kubelet[2597]: W1108 00:09:18.708049 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.708312 kubelet[2597]: E1108 00:09:18.708208 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.708457 kubelet[2597]: E1108 00:09:18.708445 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.708656 kubelet[2597]: W1108 00:09:18.708559 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.708656 kubelet[2597]: E1108 00:09:18.708577 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:18.722954 kubelet[2597]: E1108 00:09:18.722923 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.722954 kubelet[2597]: W1108 00:09:18.722946 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.723218 kubelet[2597]: E1108 00:09:18.722967 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.723218 kubelet[2597]: E1108 00:09:18.723194 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.723218 kubelet[2597]: W1108 00:09:18.723203 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.723218 kubelet[2597]: E1108 00:09:18.723213 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.723387 kubelet[2597]: E1108 00:09:18.723365 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.723387 kubelet[2597]: W1108 00:09:18.723379 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.723447 kubelet[2597]: E1108 00:09:18.723390 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.723651 kubelet[2597]: E1108 00:09:18.723637 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.723651 kubelet[2597]: W1108 00:09:18.723649 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.723746 kubelet[2597]: E1108 00:09:18.723663 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.723842 kubelet[2597]: E1108 00:09:18.723831 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.723842 kubelet[2597]: W1108 00:09:18.723841 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.724004 kubelet[2597]: E1108 00:09:18.723855 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:18.724093 kubelet[2597]: E1108 00:09:18.724081 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.724158 kubelet[2597]: W1108 00:09:18.724093 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.724158 kubelet[2597]: E1108 00:09:18.724107 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.724350 kubelet[2597]: E1108 00:09:18.724317 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.724350 kubelet[2597]: W1108 00:09:18.724326 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.724749 kubelet[2597]: E1108 00:09:18.724436 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.724749 kubelet[2597]: E1108 00:09:18.724453 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.724749 kubelet[2597]: W1108 00:09:18.724646 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.724895 kubelet[2597]: E1108 00:09:18.724880 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.725025 kubelet[2597]: E1108 00:09:18.725014 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.725093 kubelet[2597]: W1108 00:09:18.725081 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.725375 kubelet[2597]: E1108 00:09:18.725281 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.725503 kubelet[2597]: E1108 00:09:18.725488 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.725503 kubelet[2597]: W1108 00:09:18.725500 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.725569 kubelet[2597]: E1108 00:09:18.725515 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:18.725704 kubelet[2597]: E1108 00:09:18.725692 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.725738 kubelet[2597]: W1108 00:09:18.725704 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.725738 kubelet[2597]: E1108 00:09:18.725724 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.725889 kubelet[2597]: E1108 00:09:18.725878 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.725889 kubelet[2597]: W1108 00:09:18.725889 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.725949 kubelet[2597]: E1108 00:09:18.725906 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.726163 kubelet[2597]: E1108 00:09:18.726124 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.726163 kubelet[2597]: W1108 00:09:18.726162 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.726245 kubelet[2597]: E1108 00:09:18.726182 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.726584 kubelet[2597]: E1108 00:09:18.726545 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.726584 kubelet[2597]: W1108 00:09:18.726564 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.726584 kubelet[2597]: E1108 00:09:18.726581 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.726807 kubelet[2597]: E1108 00:09:18.726794 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.726843 kubelet[2597]: W1108 00:09:18.726807 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.726872 kubelet[2597]: E1108 00:09:18.726842 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:18.727087 kubelet[2597]: E1108 00:09:18.727074 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.727087 kubelet[2597]: W1108 00:09:18.727087 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.727338 kubelet[2597]: E1108 00:09:18.727277 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.727718 kubelet[2597]: E1108 00:09:18.727678 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.727718 kubelet[2597]: W1108 00:09:18.727699 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.727718 kubelet[2597]: E1108 00:09:18.727718 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:09:18.727950 kubelet[2597]: E1108 00:09:18.727933 2597 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:09:18.727983 kubelet[2597]: W1108 00:09:18.727950 2597 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:09:18.727983 kubelet[2597]: E1108 00:09:18.727960 2597 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:09:19.493367 kubelet[2597]: E1108 00:09:19.493330 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:19.530656 containerd[1475]: time="2025-11-08T00:09:19.529917244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:19.531747 containerd[1475]: time="2025-11-08T00:09:19.531718565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 8 00:09:19.532688 containerd[1475]: time="2025-11-08T00:09:19.532656325Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:19.535223 containerd[1475]: time="2025-11-08T00:09:19.535193886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:19.536606 containerd[1475]: time="2025-11-08T00:09:19.536573927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.387736127s" Nov 8 00:09:19.536786 containerd[1475]: time="2025-11-08T00:09:19.536681767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 8 00:09:19.539908 containerd[1475]: time="2025-11-08T00:09:19.539537808Z" level=info msg="CreateContainer within sandbox \"1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:09:19.560081 containerd[1475]: time="2025-11-08T00:09:19.559902654Z" level=info msg="CreateContainer within sandbox \"1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027\"" Nov 8 00:09:19.562227 containerd[1475]: time="2025-11-08T00:09:19.560864175Z" level=info msg="StartContainer for \"2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027\"" Nov 8 00:09:19.597567 systemd[1]: Started cri-containerd-2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027.scope - libcontainer container 2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027. 
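Every E/W/E triplet in the stretches above has the same root cause: kubelet's FlexVolume prober executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary does not exist yet on this node, the call therefore produces no output, and unmarshalling an empty string fails. The error text is exactly what Go's encoding/json returns for empty input, as this minimal sketch (not kubelet code) reproduces:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // A FlexVolume driver answers "init" with a JSON status object;
        // a missing driver yields no output at all, hence the empty input here.
        var status struct {
            Status string `json:"status"`
        }
        err := json.Unmarshal([]byte(""), &status)
        fmt.Println(err) // prints: unexpected end of JSON input
    }

The fix is already in flight in this same log: the flexvol-driver init container started above, from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4 inside the calico-node-8k2vx sandbox, exists to install Calico's uds driver into that plugin directory on the host, after which the probe errors stop recurring. A small sketch (an assumed helper, not part of kubelet or Calico) for checking whether the binary has landed:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken verbatim from the driver-call.go messages above.
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        fi, err := os.Stat(driver)
        if err != nil {
            fmt.Println("driver not installed yet:", err)
            return
        }
        fmt.Printf("driver present: mode=%v size=%d bytes\n", fi.Mode(), fi.Size())
    }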
Nov 8 00:09:19.635515 containerd[1475]: time="2025-11-08T00:09:19.635454160Z" level=info msg="StartContainer for \"2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027\" returns successfully" Nov 8 00:09:19.637957 kubelet[2597]: I1108 00:09:19.637731 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:09:19.653006 systemd[1]: cri-containerd-2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027.scope: Deactivated successfully. Nov 8 00:09:19.772210 containerd[1475]: time="2025-11-08T00:09:19.771988846Z" level=info msg="shim disconnected" id=2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027 namespace=k8s.io Nov 8 00:09:19.772210 containerd[1475]: time="2025-11-08T00:09:19.772045366Z" level=warning msg="cleaning up after shim disconnected" id=2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027 namespace=k8s.io Nov 8 00:09:19.772210 containerd[1475]: time="2025-11-08T00:09:19.772054966Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:09:20.160209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ab3f07946a6f9ca95c135805c1fdd6371b95ff841291ee832e3930400c7e027-rootfs.mount: Deactivated successfully. Nov 8 00:09:20.646402 containerd[1475]: time="2025-11-08T00:09:20.646313167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:09:20.672829 kubelet[2597]: I1108 00:09:20.670847 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f45b64555-cndgq" podStartSLOduration=3.748750981 podStartE2EDuration="5.670828375s" podCreationTimestamp="2025-11-08 00:09:15 +0000 UTC" firstStartedPulling="2025-11-08 00:09:16.225874085 +0000 UTC m=+28.864952581" lastFinishedPulling="2025-11-08 00:09:18.147951479 +0000 UTC m=+30.787029975" observedRunningTime="2025-11-08 00:09:18.675664389 +0000 UTC m=+31.314743045" watchObservedRunningTime="2025-11-08 00:09:20.670828375 +0000 UTC m=+33.309906871" Nov 8 00:09:21.492675 kubelet[2597]: E1108 00:09:21.490905 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:23.279579 containerd[1475]: time="2025-11-08T00:09:23.279502846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:23.281059 containerd[1475]: time="2025-11-08T00:09:23.280907807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 8 00:09:23.283163 containerd[1475]: time="2025-11-08T00:09:23.282430927Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:23.287411 containerd[1475]: time="2025-11-08T00:09:23.287371608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:23.292187 containerd[1475]: time="2025-11-08T00:09:23.292114569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo 
tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.645762562s" Nov 8 00:09:23.292187 containerd[1475]: time="2025-11-08T00:09:23.292175530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 8 00:09:23.295406 containerd[1475]: time="2025-11-08T00:09:23.295354170Z" level=info msg="CreateContainer within sandbox \"1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:09:23.312230 containerd[1475]: time="2025-11-08T00:09:23.311890655Z" level=info msg="CreateContainer within sandbox \"1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911\"" Nov 8 00:09:23.314151 containerd[1475]: time="2025-11-08T00:09:23.312433255Z" level=info msg="StartContainer for \"ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911\"" Nov 8 00:09:23.356384 systemd[1]: Started cri-containerd-ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911.scope - libcontainer container ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911. Nov 8 00:09:23.397674 containerd[1475]: time="2025-11-08T00:09:23.397534957Z" level=info msg="StartContainer for \"ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911\" returns successfully" Nov 8 00:09:23.493181 kubelet[2597]: E1108 00:09:23.493104 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:23.938632 containerd[1475]: time="2025-11-08T00:09:23.938563738Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:09:23.941318 systemd[1]: cri-containerd-ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911.scope: Deactivated successfully. Nov 8 00:09:24.000412 kubelet[2597]: I1108 00:09:23.998655 2597 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:09:24.045216 containerd[1475]: time="2025-11-08T00:09:24.045147245Z" level=info msg="shim disconnected" id=ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911 namespace=k8s.io Nov 8 00:09:24.045459 containerd[1475]: time="2025-11-08T00:09:24.045424005Z" level=warning msg="cleaning up after shim disconnected" id=ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911 namespace=k8s.io Nov 8 00:09:24.045545 containerd[1475]: time="2025-11-08T00:09:24.045527645Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:09:24.067488 systemd[1]: Created slice kubepods-burstable-podea25be93_453f_48e1_b6ef_e630315dd03d.slice - libcontainer container kubepods-burstable-podea25be93_453f_48e1_b6ef_e630315dd03d.slice. 
Nov 8 00:09:24.090289 systemd[1]: Created slice kubepods-burstable-pod5c9f9ae4_8fa2_4cf9_9f3d_b855fdc3f819.slice - libcontainer container kubepods-burstable-pod5c9f9ae4_8fa2_4cf9_9f3d_b855fdc3f819.slice. Nov 8 00:09:24.118096 systemd[1]: Created slice kubepods-besteffort-podd5333c3e_0b85_42ef_9987_96412959a46c.slice - libcontainer container kubepods-besteffort-podd5333c3e_0b85_42ef_9987_96412959a46c.slice. Nov 8 00:09:24.130592 systemd[1]: Created slice kubepods-besteffort-podfa21b1b5_7514_4555_9229_a01439384fd8.slice - libcontainer container kubepods-besteffort-podfa21b1b5_7514_4555_9229_a01439384fd8.slice. Nov 8 00:09:24.140558 systemd[1]: Created slice kubepods-besteffort-podcc3077c0_ce1f_45f6_b97f_fbfd9a4135ec.slice - libcontainer container kubepods-besteffort-podcc3077c0_ce1f_45f6_b97f_fbfd9a4135ec.slice. Nov 8 00:09:24.152351 systemd[1]: Created slice kubepods-besteffort-podbc4e8de7_6ddd_43cb_ba37_33083ff72076.slice - libcontainer container kubepods-besteffort-podbc4e8de7_6ddd_43cb_ba37_33083ff72076.slice. Nov 8 00:09:24.159222 systemd[1]: Created slice kubepods-besteffort-pod8c8208d0_d52c_4948_a7c4_1a012578a167.slice - libcontainer container kubepods-besteffort-pod8c8208d0_d52c_4948_a7c4_1a012578a167.slice. Nov 8 00:09:24.163607 kubelet[2597]: I1108 00:09:24.163566 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec-calico-apiserver-certs\") pod \"calico-apiserver-5b79fdfd8b-4hxxc\" (UID: \"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec\") " pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" Nov 8 00:09:24.163866 kubelet[2597]: I1108 00:09:24.163839 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-ca-bundle\") pod \"whisker-7549cdfd84-tb95c\" (UID: \"fa21b1b5-7514-4555-9229-a01439384fd8\") " pod="calico-system/whisker-7549cdfd84-tb95c" Nov 8 00:09:24.163974 kubelet[2597]: I1108 00:09:24.163960 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hfm6\" (UniqueName: \"kubernetes.io/projected/bc4e8de7-6ddd-43cb-ba37-33083ff72076-kube-api-access-5hfm6\") pod \"calico-apiserver-5b79fdfd8b-6tnl5\" (UID: \"bc4e8de7-6ddd-43cb-ba37-33083ff72076\") " pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" Nov 8 00:09:24.164115 kubelet[2597]: I1108 00:09:24.164050 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819-config-volume\") pod \"coredns-668d6bf9bc-xs8fd\" (UID: \"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819\") " pod="kube-system/coredns-668d6bf9bc-xs8fd" Nov 8 00:09:24.164254 kubelet[2597]: I1108 00:09:24.164072 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghfl\" (UniqueName: \"kubernetes.io/projected/5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819-kube-api-access-xghfl\") pod \"coredns-668d6bf9bc-xs8fd\" (UID: \"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819\") " pod="kube-system/coredns-668d6bf9bc-xs8fd" Nov 8 00:09:24.164368 kubelet[2597]: I1108 00:09:24.164315 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/ea25be93-453f-48e1-b6ef-e630315dd03d-config-volume\") pod \"coredns-668d6bf9bc-ssphr\" (UID: \"ea25be93-453f-48e1-b6ef-e630315dd03d\") " pod="kube-system/coredns-668d6bf9bc-ssphr" Nov 8 00:09:24.164368 kubelet[2597]: I1108 00:09:24.164344 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9gf9\" (UniqueName: \"kubernetes.io/projected/d5333c3e-0b85-42ef-9987-96412959a46c-kube-api-access-r9gf9\") pod \"calico-kube-controllers-7c4f66b45d-rgxhv\" (UID: \"d5333c3e-0b85-42ef-9987-96412959a46c\") " pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" Nov 8 00:09:24.164750 kubelet[2597]: I1108 00:09:24.164628 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5333c3e-0b85-42ef-9987-96412959a46c-tigera-ca-bundle\") pod \"calico-kube-controllers-7c4f66b45d-rgxhv\" (UID: \"d5333c3e-0b85-42ef-9987-96412959a46c\") " pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" Nov 8 00:09:24.164750 kubelet[2597]: I1108 00:09:24.164686 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc4e8de7-6ddd-43cb-ba37-33083ff72076-calico-apiserver-certs\") pod \"calico-apiserver-5b79fdfd8b-6tnl5\" (UID: \"bc4e8de7-6ddd-43cb-ba37-33083ff72076\") " pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" Nov 8 00:09:24.164750 kubelet[2597]: I1108 00:09:24.164710 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8208d0-d52c-4948-a7c4-1a012578a167-config\") pod \"goldmane-666569f655-c6ph6\" (UID: \"8c8208d0-d52c-4948-a7c4-1a012578a167\") " pod="calico-system/goldmane-666569f655-c6ph6" Nov 8 00:09:24.165163 kubelet[2597]: I1108 00:09:24.164726 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb7sd\" (UniqueName: \"kubernetes.io/projected/8c8208d0-d52c-4948-a7c4-1a012578a167-kube-api-access-bb7sd\") pod \"goldmane-666569f655-c6ph6\" (UID: \"8c8208d0-d52c-4948-a7c4-1a012578a167\") " pod="calico-system/goldmane-666569f655-c6ph6" Nov 8 00:09:24.165163 kubelet[2597]: I1108 00:09:24.164939 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8c8208d0-d52c-4948-a7c4-1a012578a167-goldmane-key-pair\") pod \"goldmane-666569f655-c6ph6\" (UID: \"8c8208d0-d52c-4948-a7c4-1a012578a167\") " pod="calico-system/goldmane-666569f655-c6ph6" Nov 8 00:09:24.165163 kubelet[2597]: I1108 00:09:24.164958 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmrm\" (UniqueName: \"kubernetes.io/projected/fa21b1b5-7514-4555-9229-a01439384fd8-kube-api-access-2gmrm\") pod \"whisker-7549cdfd84-tb95c\" (UID: \"fa21b1b5-7514-4555-9229-a01439384fd8\") " pod="calico-system/whisker-7549cdfd84-tb95c" Nov 8 00:09:24.165163 kubelet[2597]: I1108 00:09:24.164974 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtknc\" (UniqueName: \"kubernetes.io/projected/ea25be93-453f-48e1-b6ef-e630315dd03d-kube-api-access-jtknc\") pod \"coredns-668d6bf9bc-ssphr\" (UID: \"ea25be93-453f-48e1-b6ef-e630315dd03d\") " pod="kube-system/coredns-668d6bf9bc-ssphr" 
Nov 8 00:09:24.165163 kubelet[2597]: I1108 00:09:24.165003 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c8208d0-d52c-4948-a7c4-1a012578a167-goldmane-ca-bundle\") pod \"goldmane-666569f655-c6ph6\" (UID: \"8c8208d0-d52c-4948-a7c4-1a012578a167\") " pod="calico-system/goldmane-666569f655-c6ph6" Nov 8 00:09:24.165304 kubelet[2597]: I1108 00:09:24.165022 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2v9j\" (UniqueName: \"kubernetes.io/projected/cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec-kube-api-access-x2v9j\") pod \"calico-apiserver-5b79fdfd8b-4hxxc\" (UID: \"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec\") " pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" Nov 8 00:09:24.165304 kubelet[2597]: I1108 00:09:24.165048 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-backend-key-pair\") pod \"whisker-7549cdfd84-tb95c\" (UID: \"fa21b1b5-7514-4555-9229-a01439384fd8\") " pod="calico-system/whisker-7549cdfd84-tb95c" Nov 8 00:09:24.315417 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef52fb72b58b7dd04bbdd6f5c018cb1757d79023eee07ba7cb92411fc9f0f911-rootfs.mount: Deactivated successfully. Nov 8 00:09:24.377594 containerd[1475]: time="2025-11-08T00:09:24.377523686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssphr,Uid:ea25be93-453f-48e1-b6ef-e630315dd03d,Namespace:kube-system,Attempt:0,}" Nov 8 00:09:24.416176 containerd[1475]: time="2025-11-08T00:09:24.415639295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs8fd,Uid:5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819,Namespace:kube-system,Attempt:0,}" Nov 8 00:09:24.425108 containerd[1475]: time="2025-11-08T00:09:24.425067058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c4f66b45d-rgxhv,Uid:d5333c3e-0b85-42ef-9987-96412959a46c,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:24.439457 containerd[1475]: time="2025-11-08T00:09:24.439399981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7549cdfd84-tb95c,Uid:fa21b1b5-7514-4555-9229-a01439384fd8,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:24.454272 containerd[1475]: time="2025-11-08T00:09:24.454230985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-4hxxc,Uid:cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:09:24.459107 containerd[1475]: time="2025-11-08T00:09:24.459000946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-6tnl5,Uid:bc4e8de7-6ddd-43cb-ba37-33083ff72076,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:09:24.465122 containerd[1475]: time="2025-11-08T00:09:24.465082707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c6ph6,Uid:8c8208d0-d52c-4948-a7c4-1a012578a167,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:24.547985 containerd[1475]: time="2025-11-08T00:09:24.547762528Z" level=error msg="Failed to destroy network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 00:09:24.552595 containerd[1475]: time="2025-11-08T00:09:24.552443969Z" level=error msg="encountered an error cleaning up failed sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.553150 containerd[1475]: time="2025-11-08T00:09:24.553026769Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssphr,Uid:ea25be93-453f-48e1-b6ef-e630315dd03d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.554170 kubelet[2597]: E1108 00:09:24.554018 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.554170 kubelet[2597]: E1108 00:09:24.554086 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ssphr" Nov 8 00:09:24.554170 kubelet[2597]: E1108 00:09:24.554108 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ssphr" Nov 8 00:09:24.554581 kubelet[2597]: E1108 00:09:24.554163 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ssphr_kube-system(ea25be93-453f-48e1-b6ef-e630315dd03d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ssphr_kube-system(ea25be93-453f-48e1-b6ef-e630315dd03d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ssphr" podUID="ea25be93-453f-48e1-b6ef-e630315dd03d" Nov 8 00:09:24.578237 containerd[1475]: time="2025-11-08T00:09:24.577248815Z" level=error msg="Failed to destroy network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.581172 containerd[1475]: time="2025-11-08T00:09:24.581082056Z" level=error msg="encountered an error cleaning up failed sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.581846 containerd[1475]: time="2025-11-08T00:09:24.581790336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs8fd,Uid:5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.582226 kubelet[2597]: E1108 00:09:24.582181 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.582310 kubelet[2597]: E1108 00:09:24.582242 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xs8fd" Nov 8 00:09:24.582310 kubelet[2597]: E1108 00:09:24.582262 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xs8fd" Nov 8 00:09:24.582310 kubelet[2597]: E1108 00:09:24.582298 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xs8fd_kube-system(5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xs8fd_kube-system(5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xs8fd" podUID="5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819" Nov 8 00:09:24.627901 containerd[1475]: time="2025-11-08T00:09:24.627840107Z" level=error msg="Failed to destroy network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.628331 containerd[1475]: time="2025-11-08T00:09:24.628221667Z" level=error msg="encountered an error cleaning up failed sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.628331 containerd[1475]: time="2025-11-08T00:09:24.628279987Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-4hxxc,Uid:cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.628487 containerd[1475]: time="2025-11-08T00:09:24.628396467Z" level=error msg="Failed to destroy network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.629020 kubelet[2597]: E1108 00:09:24.628795 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.629020 kubelet[2597]: E1108 00:09:24.628872 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" Nov 8 00:09:24.629020 kubelet[2597]: E1108 00:09:24.628900 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" Nov 8 00:09:24.629273 kubelet[2597]: E1108 00:09:24.628948 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b79fdfd8b-4hxxc_calico-apiserver(cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b79fdfd8b-4hxxc_calico-apiserver(cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:09:24.631981 containerd[1475]: time="2025-11-08T00:09:24.631844388Z" level=error msg="encountered an error cleaning up failed sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.631981 containerd[1475]: time="2025-11-08T00:09:24.631925548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7549cdfd84-tb95c,Uid:fa21b1b5-7514-4555-9229-a01439384fd8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.632540 kubelet[2597]: E1108 00:09:24.632283 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.632631 kubelet[2597]: E1108 00:09:24.632565 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7549cdfd84-tb95c" Nov 8 00:09:24.632662 kubelet[2597]: E1108 00:09:24.632622 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7549cdfd84-tb95c" Nov 8 00:09:24.632832 kubelet[2597]: E1108 00:09:24.632745 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7549cdfd84-tb95c_calico-system(fa21b1b5-7514-4555-9229-a01439384fd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7549cdfd84-tb95c_calico-system(fa21b1b5-7514-4555-9229-a01439384fd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7549cdfd84-tb95c" podUID="fa21b1b5-7514-4555-9229-a01439384fd8" Nov 8 00:09:24.666105 containerd[1475]: time="2025-11-08T00:09:24.665963437Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:09:24.667261 kubelet[2597]: I1108 00:09:24.666556 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:24.669911 containerd[1475]: time="2025-11-08T00:09:24.669864437Z" level=info msg="StopPodSandbox for \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\"" Nov 8 00:09:24.671217 containerd[1475]: time="2025-11-08T00:09:24.670223158Z" level=info msg="Ensure that sandbox 6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437 in task-service has been cleanup successfully" Nov 8 00:09:24.678717 kubelet[2597]: I1108 00:09:24.677755 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:24.680734 containerd[1475]: time="2025-11-08T00:09:24.680693320Z" level=error msg="Failed to destroy network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.682544 containerd[1475]: time="2025-11-08T00:09:24.681247960Z" level=info msg="StopPodSandbox for \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\"" Nov 8 00:09:24.688262 containerd[1475]: time="2025-11-08T00:09:24.688207002Z" level=info msg="Ensure that sandbox cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297 in task-service has been cleanup successfully" Nov 8 00:09:24.688993 containerd[1475]: time="2025-11-08T00:09:24.688953282Z" level=error msg="encountered an error cleaning up failed sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.689859 containerd[1475]: time="2025-11-08T00:09:24.689812082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c6ph6,Uid:8c8208d0-d52c-4948-a7c4-1a012578a167,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.690690 kubelet[2597]: E1108 00:09:24.690369 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.690690 kubelet[2597]: E1108 00:09:24.690479 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-c6ph6" Nov 8 00:09:24.690690 kubelet[2597]: E1108 00:09:24.690504 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-c6ph6" Nov 8 00:09:24.690827 kubelet[2597]: E1108 00:09:24.690650 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-c6ph6_calico-system(8c8208d0-d52c-4948-a7c4-1a012578a167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-c6ph6_calico-system(8c8208d0-d52c-4948-a7c4-1a012578a167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:09:24.693163 containerd[1475]: time="2025-11-08T00:09:24.693086323Z" level=error msg="Failed to destroy network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.694789 kubelet[2597]: I1108 00:09:24.694285 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:24.696183 containerd[1475]: time="2025-11-08T00:09:24.695486564Z" level=error msg="encountered an error cleaning up failed sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.696183 containerd[1475]: time="2025-11-08T00:09:24.695566644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c4f66b45d-rgxhv,Uid:d5333c3e-0b85-42ef-9987-96412959a46c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.696183 containerd[1475]: time="2025-11-08T00:09:24.695767804Z" level=info msg="StopPodSandbox for \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\"" Nov 8 00:09:24.696183 containerd[1475]: time="2025-11-08T00:09:24.695936324Z" level=info msg="Ensure that sandbox 468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28 in task-service has been cleanup successfully" Nov 8 00:09:24.696699 kubelet[2597]: E1108 00:09:24.696475 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.696699 kubelet[2597]: E1108 00:09:24.696540 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" Nov 8 00:09:24.696699 kubelet[2597]: E1108 00:09:24.696560 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" Nov 8 00:09:24.696814 kubelet[2597]: E1108 00:09:24.696597 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c4f66b45d-rgxhv_calico-system(d5333c3e-0b85-42ef-9987-96412959a46c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c4f66b45d-rgxhv_calico-system(d5333c3e-0b85-42ef-9987-96412959a46c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:09:24.711024 kubelet[2597]: I1108 00:09:24.710990 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:24.711668 containerd[1475]: time="2025-11-08T00:09:24.711629488Z" level=info msg="StopPodSandbox for \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\"" Nov 8 00:09:24.711847 containerd[1475]: time="2025-11-08T00:09:24.711825528Z" level=info msg="Ensure that sandbox c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f in task-service has been cleanup successfully" Nov 8 00:09:24.724173 containerd[1475]: time="2025-11-08T00:09:24.724059691Z" level=error msg="Failed to destroy network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.725112 containerd[1475]: time="2025-11-08T00:09:24.724972571Z" level=error msg="encountered an error cleaning up failed sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 00:09:24.725112 containerd[1475]: time="2025-11-08T00:09:24.725045891Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-6tnl5,Uid:bc4e8de7-6ddd-43cb-ba37-33083ff72076,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.725826 kubelet[2597]: E1108 00:09:24.725478 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.725826 kubelet[2597]: E1108 00:09:24.725539 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" Nov 8 00:09:24.725826 kubelet[2597]: E1108 00:09:24.725560 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" Nov 8 00:09:24.725954 kubelet[2597]: E1108 00:09:24.725597 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b79fdfd8b-6tnl5_calico-apiserver(bc4e8de7-6ddd-43cb-ba37-33083ff72076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b79fdfd8b-6tnl5_calico-apiserver(bc4e8de7-6ddd-43cb-ba37-33083ff72076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:09:24.755817 containerd[1475]: time="2025-11-08T00:09:24.755727858Z" level=error msg="StopPodSandbox for \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\" failed" error="failed to destroy network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.756802 kubelet[2597]: E1108 00:09:24.756745 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:24.756901 kubelet[2597]: E1108 00:09:24.756829 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297"} Nov 8 00:09:24.756928 kubelet[2597]: E1108 00:09:24.756904 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa21b1b5-7514-4555-9229-a01439384fd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:24.756983 kubelet[2597]: E1108 00:09:24.756930 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa21b1b5-7514-4555-9229-a01439384fd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7549cdfd84-tb95c" podUID="fa21b1b5-7514-4555-9229-a01439384fd8" Nov 8 00:09:24.759983 containerd[1475]: time="2025-11-08T00:09:24.759692699Z" level=error msg="StopPodSandbox for \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\" failed" error="failed to destroy network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.760082 kubelet[2597]: E1108 00:09:24.759911 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:24.760082 kubelet[2597]: E1108 00:09:24.759956 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437"} Nov 8 00:09:24.760082 kubelet[2597]: E1108 00:09:24.759987 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 8 00:09:24.760082 kubelet[2597]: E1108 00:09:24.760009 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:09:24.772289 containerd[1475]: time="2025-11-08T00:09:24.772073982Z" level=error msg="StopPodSandbox for \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\" failed" error="failed to destroy network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.773066 kubelet[2597]: E1108 00:09:24.772728 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:24.773066 kubelet[2597]: E1108 00:09:24.772796 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28"} Nov 8 00:09:24.773066 kubelet[2597]: E1108 00:09:24.772833 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:24.773066 kubelet[2597]: E1108 00:09:24.772856 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xs8fd" podUID="5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819" Nov 8 00:09:24.779241 containerd[1475]: time="2025-11-08T00:09:24.778806664Z" level=error msg="StopPodSandbox for \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\" failed" error="failed to destroy network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 8 00:09:24.779393 kubelet[2597]: E1108 00:09:24.779055 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:24.779393 kubelet[2597]: E1108 00:09:24.779106 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f"} Nov 8 00:09:24.779393 kubelet[2597]: E1108 00:09:24.779162 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea25be93-453f-48e1-b6ef-e630315dd03d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:24.779393 kubelet[2597]: E1108 00:09:24.779185 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea25be93-453f-48e1-b6ef-e630315dd03d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ssphr" podUID="ea25be93-453f-48e1-b6ef-e630315dd03d" Nov 8 00:09:25.313640 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28-shm.mount: Deactivated successfully. Nov 8 00:09:25.313784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f-shm.mount: Deactivated successfully. Nov 8 00:09:25.500918 systemd[1]: Created slice kubepods-besteffort-pod8da5e4ab_3d6f_46a3_91d8_e794f2481a0b.slice - libcontainer container kubepods-besteffort-pod8da5e4ab_3d6f_46a3_91d8_e794f2481a0b.slice. 
Nov 8 00:09:25.503405 containerd[1475]: time="2025-11-08T00:09:25.503363033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4z57,Uid:8da5e4ab-3d6f-46a3-91d8-e794f2481a0b,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:25.564664 containerd[1475]: time="2025-11-08T00:09:25.564469447Z" level=error msg="Failed to destroy network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.564806 sshd[2640]: Connection reset by 147.139.164.196 port 6102 [preauth] Nov 8 00:09:25.573543 kubelet[2597]: E1108 00:09:25.567416 2597 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.573543 kubelet[2597]: E1108 00:09:25.567491 2597 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4z57" Nov 8 00:09:25.573543 kubelet[2597]: E1108 00:09:25.567514 2597 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l4z57" Nov 8 00:09:25.569789 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766-shm.mount: Deactivated successfully. 
Nov 8 00:09:25.574071 containerd[1475]: time="2025-11-08T00:09:25.567115128Z" level=error msg="encountered an error cleaning up failed sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.574071 containerd[1475]: time="2025-11-08T00:09:25.567190568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4z57,Uid:8da5e4ab-3d6f-46a3-91d8-e794f2481a0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.574271 kubelet[2597]: E1108 00:09:25.567561 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:25.570940 systemd[1]: sshd@7-46.224.11.50:22-147.139.164.196:6102.service: Deactivated successfully. 
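Annotation: both the teardown failures above and this RunPodSandbox attempt die on the same precondition. The Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes once it has registered the host, and at this point in the log that container has not started yet (its image is still being pulled; see the 00:09:29 entries below). A minimal reproduction of the gate, assuming only the path quoted in the error text:

```python
import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path taken from the error text above

def calico_node_ready() -> bool:
    """True once calico/node has registered this host and written the file."""
    return os.path.isfile(NODENAME_FILE)

if __name__ == "__main__":
    if not calico_node_ready():
        # Same advice the plugin prints in the log entries above.
        sys.exit(f"{NODENAME_FILE} missing: check that the calico/node "
                 "container is running and has mounted /var/lib/calico/")
    print("node registered as", open(NODENAME_FILE).read().strip())
```

Once calico-node starts at 00:09:29, the same CNI calls begin to succeed.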
Nov 8 00:09:25.716010 kubelet[2597]: I1108 00:09:25.715956 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:25.717863 containerd[1475]: time="2025-11-08T00:09:25.716981362Z" level=info msg="StopPodSandbox for \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\"" Nov 8 00:09:25.718010 containerd[1475]: time="2025-11-08T00:09:25.717712842Z" level=info msg="Ensure that sandbox eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5 in task-service has been cleanup successfully" Nov 8 00:09:25.719269 kubelet[2597]: I1108 00:09:25.719186 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:25.720295 containerd[1475]: time="2025-11-08T00:09:25.720253683Z" level=info msg="StopPodSandbox for \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\"" Nov 8 00:09:25.720447 containerd[1475]: time="2025-11-08T00:09:25.720407243Z" level=info msg="Ensure that sandbox 3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc in task-service has been cleanup successfully" Nov 8 00:09:25.722484 kubelet[2597]: I1108 00:09:25.722451 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:25.723873 containerd[1475]: time="2025-11-08T00:09:25.723834684Z" level=info msg="StopPodSandbox for \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\"" Nov 8 00:09:25.724253 containerd[1475]: time="2025-11-08T00:09:25.724034324Z" level=info msg="Ensure that sandbox f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7 in task-service has been cleanup successfully" Nov 8 00:09:25.727938 kubelet[2597]: I1108 00:09:25.727077 2597 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:25.729857 containerd[1475]: time="2025-11-08T00:09:25.729416885Z" level=info msg="StopPodSandbox for \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\"" Nov 8 00:09:25.730032 containerd[1475]: time="2025-11-08T00:09:25.729974565Z" level=info msg="Ensure that sandbox a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766 in task-service has been cleanup successfully" Nov 8 00:09:25.778333 containerd[1475]: time="2025-11-08T00:09:25.778250936Z" level=error msg="StopPodSandbox for \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\" failed" error="failed to destroy network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.779233 kubelet[2597]: E1108 00:09:25.779194 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:25.779333 kubelet[2597]: E1108 
00:09:25.779239 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5"} Nov 8 00:09:25.779333 kubelet[2597]: E1108 00:09:25.779288 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5333c3e-0b85-42ef-9987-96412959a46c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:25.779333 kubelet[2597]: E1108 00:09:25.779308 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5333c3e-0b85-42ef-9987-96412959a46c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:09:25.791506 containerd[1475]: time="2025-11-08T00:09:25.791381419Z" level=error msg="StopPodSandbox for \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\" failed" error="failed to destroy network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.791914 kubelet[2597]: E1108 00:09:25.791666 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:25.791914 kubelet[2597]: E1108 00:09:25.791717 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc"} Nov 8 00:09:25.791914 kubelet[2597]: E1108 00:09:25.791750 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c8208d0-d52c-4948-a7c4-1a012578a167\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:25.791914 kubelet[2597]: E1108 00:09:25.791775 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c8208d0-d52c-4948-a7c4-1a012578a167\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:09:25.795046 containerd[1475]: time="2025-11-08T00:09:25.794380980Z" level=error msg="StopPodSandbox for \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\" failed" error="failed to destroy network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.795185 kubelet[2597]: E1108 00:09:25.794685 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:25.795185 kubelet[2597]: E1108 00:09:25.794761 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766"} Nov 8 00:09:25.795185 kubelet[2597]: E1108 00:09:25.794806 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:25.795185 kubelet[2597]: E1108 00:09:25.794827 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:25.796770 containerd[1475]: time="2025-11-08T00:09:25.796676821Z" level=error msg="StopPodSandbox for \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\" failed" error="failed to destroy network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:09:25.797221 kubelet[2597]: E1108 00:09:25.796928 2597 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\": plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:25.797221 kubelet[2597]: E1108 00:09:25.796974 2597 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7"} Nov 8 00:09:25.797221 kubelet[2597]: E1108 00:09:25.797003 2597 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bc4e8de7-6ddd-43cb-ba37-33083ff72076\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:09:25.797221 kubelet[2597]: E1108 00:09:25.797032 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bc4e8de7-6ddd-43cb-ba37-33083ff72076\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:09:29.048585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898255247.mount: Deactivated successfully. 
Nov 8 00:09:29.080769 containerd[1475]: time="2025-11-08T00:09:29.080703286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:29.082871 containerd[1475]: time="2025-11-08T00:09:29.082809766Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 8 00:09:29.084501 containerd[1475]: time="2025-11-08T00:09:29.083897126Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:29.087738 containerd[1475]: time="2025-11-08T00:09:29.087464207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:09:29.088266 containerd[1475]: time="2025-11-08T00:09:29.088216807Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.42072493s" Nov 8 00:09:29.088266 containerd[1475]: time="2025-11-08T00:09:29.088265447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 8 00:09:29.106813 containerd[1475]: time="2025-11-08T00:09:29.106608650Z" level=info msg="CreateContainer within sandbox \"1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:09:29.125051 containerd[1475]: time="2025-11-08T00:09:29.124879454Z" level=info msg="CreateContainer within sandbox \"1974f27a45246946a6ca203d291f7229150a450cba4bf847a010f15555799d92\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f\"" Nov 8 00:09:29.126156 containerd[1475]: time="2025-11-08T00:09:29.126083694Z" level=info msg="StartContainer for \"5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f\"" Nov 8 00:09:29.159606 systemd[1]: Started cri-containerd-5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f.scope - libcontainer container 5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f. Nov 8 00:09:29.197802 containerd[1475]: time="2025-11-08T00:09:29.197759587Z" level=info msg="StartContainer for \"5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f\" returns successfully" Nov 8 00:09:29.360498 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:09:29.360650 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
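Annotation: this is the turning point of the log: the calico/node image finishes pulling and the calico-node container starts. The logged byte count and duration work out to roughly 34 MB/s from ghcr.io:

```python
size_bytes = 150_934_562        # "stop pulling image ... bytes read=150934562"
duration_s = 4.42072493         # "... in 4.42072493s"
print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # -> 34.1 MB/s
```

The WireGuard module load immediately afterwards is consistent with calico-node probing its optional WireGuard encryption support on startup, although the log itself does not say what triggered the load.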
Nov 8 00:09:29.543750 containerd[1475]: time="2025-11-08T00:09:29.542830608Z" level=info msg="StopPodSandbox for \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\"" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.646 [INFO][3792] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.646 [INFO][3792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" iface="eth0" netns="/var/run/netns/cni-d6148d51-7f76-ce8e-9405-3f5322a0ea5e" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.647 [INFO][3792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" iface="eth0" netns="/var/run/netns/cni-d6148d51-7f76-ce8e-9405-3f5322a0ea5e" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.649 [INFO][3792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" iface="eth0" netns="/var/run/netns/cni-d6148d51-7f76-ce8e-9405-3f5322a0ea5e" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.649 [INFO][3792] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.649 [INFO][3792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.700 [INFO][3805] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.700 [INFO][3805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.700 [INFO][3805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.717 [WARNING][3805] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.717 [INFO][3805] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.724 [INFO][3805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:29.728390 containerd[1475]: 2025-11-08 00:09:29.726 [INFO][3792] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:29.729583 containerd[1475]: time="2025-11-08T00:09:29.729527601Z" level=info msg="TearDown network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\" successfully" Nov 8 00:09:29.729583 containerd[1475]: time="2025-11-08T00:09:29.729580881Z" level=info msg="StopPodSandbox for \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\" returns successfully" Nov 8 00:09:29.774169 kubelet[2597]: I1108 00:09:29.773920 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8k2vx" podStartSLOduration=2.095491242 podStartE2EDuration="14.773811288s" podCreationTimestamp="2025-11-08 00:09:15 +0000 UTC" firstStartedPulling="2025-11-08 00:09:16.411263641 +0000 UTC m=+29.050342137" lastFinishedPulling="2025-11-08 00:09:29.089583687 +0000 UTC m=+41.728662183" observedRunningTime="2025-11-08 00:09:29.772244088 +0000 UTC m=+42.411322584" watchObservedRunningTime="2025-11-08 00:09:29.773811288 +0000 UTC m=+42.412889784" Nov 8 00:09:29.804936 kubelet[2597]: I1108 00:09:29.804893 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gmrm\" (UniqueName: \"kubernetes.io/projected/fa21b1b5-7514-4555-9229-a01439384fd8-kube-api-access-2gmrm\") pod \"fa21b1b5-7514-4555-9229-a01439384fd8\" (UID: \"fa21b1b5-7514-4555-9229-a01439384fd8\") " Nov 8 00:09:29.804936 kubelet[2597]: I1108 00:09:29.804951 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-ca-bundle\") pod \"fa21b1b5-7514-4555-9229-a01439384fd8\" (UID: \"fa21b1b5-7514-4555-9229-a01439384fd8\") " Nov 8 00:09:29.805119 kubelet[2597]: I1108 00:09:29.804976 2597 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-backend-key-pair\") pod \"fa21b1b5-7514-4555-9229-a01439384fd8\" (UID: \"fa21b1b5-7514-4555-9229-a01439384fd8\") " Nov 8 00:09:29.811157 kubelet[2597]: I1108 00:09:29.810869 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fa21b1b5-7514-4555-9229-a01439384fd8" (UID: "fa21b1b5-7514-4555-9229-a01439384fd8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:09:29.812620 kubelet[2597]: I1108 00:09:29.812594 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fa21b1b5-7514-4555-9229-a01439384fd8" (UID: "fa21b1b5-7514-4555-9229-a01439384fd8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:09:29.812759 kubelet[2597]: I1108 00:09:29.812729 2597 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa21b1b5-7514-4555-9229-a01439384fd8-kube-api-access-2gmrm" (OuterVolumeSpecName: "kube-api-access-2gmrm") pod "fa21b1b5-7514-4555-9229-a01439384fd8" (UID: "fa21b1b5-7514-4555-9229-a01439384fd8"). InnerVolumeSpecName "kube-api-access-2gmrm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:09:29.905938 kubelet[2597]: I1108 00:09:29.905879 2597 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2gmrm\" (UniqueName: \"kubernetes.io/projected/fa21b1b5-7514-4555-9229-a01439384fd8-kube-api-access-2gmrm\") on node \"ci-4081-3-6-n-fb20dfd731\" DevicePath \"\"" Nov 8 00:09:29.905938 kubelet[2597]: I1108 00:09:29.905934 2597 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-ca-bundle\") on node \"ci-4081-3-6-n-fb20dfd731\" DevicePath \"\"" Nov 8 00:09:29.906118 kubelet[2597]: I1108 00:09:29.905957 2597 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fa21b1b5-7514-4555-9229-a01439384fd8-whisker-backend-key-pair\") on node \"ci-4081-3-6-n-fb20dfd731\" DevicePath \"\"" Nov 8 00:09:30.050373 systemd[1]: run-netns-cni\x2dd6148d51\x2d7f76\x2dce8e\x2d9405\x2d3f5322a0ea5e.mount: Deactivated successfully. Nov 8 00:09:30.050483 systemd[1]: var-lib-kubelet-pods-fa21b1b5\x2d7514\x2d4555\x2d9229\x2da01439384fd8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2gmrm.mount: Deactivated successfully. Nov 8 00:09:30.050544 systemd[1]: var-lib-kubelet-pods-fa21b1b5\x2d7514\x2d4555\x2d9229\x2da01439384fd8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:09:30.658778 kubelet[2597]: I1108 00:09:30.658721 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:09:30.751704 kubelet[2597]: I1108 00:09:30.750938 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:09:30.762213 systemd[1]: Removed slice kubepods-besteffort-podfa21b1b5_7514_4555_9229_a01439384fd8.slice - libcontainer container kubepods-besteffort-podfa21b1b5_7514_4555_9229_a01439384fd8.slice. Nov 8 00:09:30.838721 systemd[1]: Created slice kubepods-besteffort-pod2215902e_6443_46b1_ae69_123ea2434f7b.slice - libcontainer container kubepods-besteffort-pod2215902e_6443_46b1_ae69_123ea2434f7b.slice. 
Nov 8 00:09:30.913104 kubelet[2597]: I1108 00:09:30.912976 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2215902e-6443-46b1-ae69-123ea2434f7b-whisker-backend-key-pair\") pod \"whisker-74965ffc78-dt582\" (UID: \"2215902e-6443-46b1-ae69-123ea2434f7b\") " pod="calico-system/whisker-74965ffc78-dt582" Nov 8 00:09:30.913942 kubelet[2597]: I1108 00:09:30.913200 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2215902e-6443-46b1-ae69-123ea2434f7b-whisker-ca-bundle\") pod \"whisker-74965ffc78-dt582\" (UID: \"2215902e-6443-46b1-ae69-123ea2434f7b\") " pod="calico-system/whisker-74965ffc78-dt582" Nov 8 00:09:30.913942 kubelet[2597]: I1108 00:09:30.913259 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcwhx\" (UniqueName: \"kubernetes.io/projected/2215902e-6443-46b1-ae69-123ea2434f7b-kube-api-access-xcwhx\") pod \"whisker-74965ffc78-dt582\" (UID: \"2215902e-6443-46b1-ae69-123ea2434f7b\") " pod="calico-system/whisker-74965ffc78-dt582" Nov 8 00:09:31.144864 containerd[1475]: time="2025-11-08T00:09:31.144811917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74965ffc78-dt582,Uid:2215902e-6443-46b1-ae69-123ea2434f7b,Namespace:calico-system,Attempt:0,}" Nov 8 00:09:31.349960 systemd-networkd[1380]: calib477c9e2d97: Link UP Nov 8 00:09:31.352674 systemd-networkd[1380]: calib477c9e2d97: Gained carrier Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.197 [INFO][3922] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.221 [INFO][3922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0 whisker-74965ffc78- calico-system 2215902e-6443-46b1-ae69-123ea2434f7b 881 0 2025-11-08 00:09:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74965ffc78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 whisker-74965ffc78-dt582 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib477c9e2d97 [] [] }} ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.222 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.272 [INFO][3935] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" HandleID="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.272 [INFO][3935] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" HandleID="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"whisker-74965ffc78-dt582", "timestamp":"2025-11-08 00:09:31.272336377 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.272 [INFO][3935] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.272 [INFO][3935] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.272 [INFO][3935] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.292 [INFO][3935] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.300 [INFO][3935] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.308 [INFO][3935] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.312 [INFO][3935] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.315 [INFO][3935] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.316 [INFO][3935] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.318 [INFO][3935] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11 Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.325 [INFO][3935] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.332 [INFO][3935] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.65/26] block=192.168.72.64/26 handle="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.332 [INFO][3935] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.65/26] handle="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.332 [INFO][3935] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:31.383613 containerd[1475]: 2025-11-08 00:09:31.332 [INFO][3935] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.65/26] IPv6=[] ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" HandleID="k8s-pod-network.c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" Nov 8 00:09:31.384208 containerd[1475]: 2025-11-08 00:09:31.335 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0", GenerateName:"whisker-74965ffc78-", Namespace:"calico-system", SelfLink:"", UID:"2215902e-6443-46b1-ae69-123ea2434f7b", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74965ffc78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"whisker-74965ffc78-dt582", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.72.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib477c9e2d97", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:31.384208 containerd[1475]: 2025-11-08 00:09:31.335 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.65/32] ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" Nov 8 00:09:31.384208 containerd[1475]: 2025-11-08 00:09:31.335 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib477c9e2d97 ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" Nov 8 00:09:31.384208 containerd[1475]: 2025-11-08 00:09:31.355 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" Nov 8 00:09:31.384208 containerd[1475]: 2025-11-08 00:09:31.355 [INFO][3922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0", GenerateName:"whisker-74965ffc78-", Namespace:"calico-system", SelfLink:"", UID:"2215902e-6443-46b1-ae69-123ea2434f7b", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74965ffc78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11", Pod:"whisker-74965ffc78-dt582", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.72.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib477c9e2d97", MAC:"0a:8d:6b:13:7b:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:31.384208 containerd[1475]: 2025-11-08 00:09:31.380 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11" Namespace="calico-system" Pod="whisker-74965ffc78-dt582" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--74965ffc78--dt582-eth0" Nov 8 00:09:31.403202 kernel: bpftool[3974]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:09:31.419214 containerd[1475]: time="2025-11-08T00:09:31.417979839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:31.423672 containerd[1475]: time="2025-11-08T00:09:31.421562920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:31.423672 containerd[1475]: time="2025-11-08T00:09:31.421593560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:31.423672 containerd[1475]: time="2025-11-08T00:09:31.421692240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:31.445751 systemd[1]: run-containerd-runc-k8s.io-c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11-runc.xIKQop.mount: Deactivated successfully. Nov 8 00:09:31.457401 systemd[1]: Started cri-containerd-c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11.scope - libcontainer container c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11. 
Nov 8 00:09:31.495493 kubelet[2597]: I1108 00:09:31.495446 2597 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa21b1b5-7514-4555-9229-a01439384fd8" path="/var/lib/kubelet/pods/fa21b1b5-7514-4555-9229-a01439384fd8/volumes" Nov 8 00:09:31.508818 containerd[1475]: time="2025-11-08T00:09:31.508744733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74965ffc78-dt582,Uid:2215902e-6443-46b1-ae69-123ea2434f7b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9ad0681739e1778d5f4f11a4e7258372f4006826dd231d9bbb71acc24619e11\"" Nov 8 00:09:31.510701 containerd[1475]: time="2025-11-08T00:09:31.510544054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:09:31.634240 systemd-networkd[1380]: vxlan.calico: Link UP Nov 8 00:09:31.634252 systemd-networkd[1380]: vxlan.calico: Gained carrier Nov 8 00:09:31.866591 containerd[1475]: time="2025-11-08T00:09:31.866213109Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:31.867888 containerd[1475]: time="2025-11-08T00:09:31.867680029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:09:31.867888 containerd[1475]: time="2025-11-08T00:09:31.867798149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:09:31.868301 kubelet[2597]: E1108 00:09:31.868219 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:31.868507 kubelet[2597]: E1108 00:09:31.868414 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:31.873104 kubelet[2597]: E1108 00:09:31.872904 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fef3b1f193134241bf57d7645ef8585e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:31.875158 containerd[1475]: time="2025-11-08T00:09:31.875003030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:09:32.234355 containerd[1475]: time="2025-11-08T00:09:32.234105724Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:32.236008 containerd[1475]: time="2025-11-08T00:09:32.235876284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:09:32.236008 containerd[1475]: time="2025-11-08T00:09:32.235947084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:32.238086 kubelet[2597]: E1108 00:09:32.236152 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:32.238086 kubelet[2597]: E1108 00:09:32.236207 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:32.238592 kubelet[2597]: E1108 00:09:32.236346 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:32.241406 kubelet[2597]: E1108 00:09:32.241333 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:09:32.770624 kubelet[2597]: E1108 00:09:32.770497 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:09:32.861647 systemd-networkd[1380]: calib477c9e2d97: Gained IPv6LL Nov 8 00:09:33.502331 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Nov 8 00:09:34.385037 kubelet[2597]: I1108 00:09:34.383842 2597 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:09:34.544043 systemd[1]: run-containerd-runc-k8s.io-5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f-runc.7XSzle.mount: Deactivated successfully. Nov 8 00:09:36.493715 containerd[1475]: time="2025-11-08T00:09:36.493353876Z" level=info msg="StopPodSandbox for \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\"" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.568 [INFO][4150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.568 [INFO][4150] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" iface="eth0" netns="/var/run/netns/cni-a4b92769-d0f0-1f6c-bd1c-5e70027312b2" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.568 [INFO][4150] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" iface="eth0" netns="/var/run/netns/cni-a4b92769-d0f0-1f6c-bd1c-5e70027312b2" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.569 [INFO][4150] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" iface="eth0" netns="/var/run/netns/cni-a4b92769-d0f0-1f6c-bd1c-5e70027312b2" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.569 [INFO][4150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.569 [INFO][4150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.608 [INFO][4158] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.609 [INFO][4158] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.609 [INFO][4158] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.623 [WARNING][4158] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.623 [INFO][4158] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.627 [INFO][4158] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:36.632281 containerd[1475]: 2025-11-08 00:09:36.630 [INFO][4150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:36.636206 containerd[1475]: time="2025-11-08T00:09:36.636151492Z" level=info msg="TearDown network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\" successfully" Nov 8 00:09:36.636206 containerd[1475]: time="2025-11-08T00:09:36.636203452Z" level=info msg="StopPodSandbox for \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\" returns successfully" Nov 8 00:09:36.638005 containerd[1475]: time="2025-11-08T00:09:36.637956412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-4hxxc,Uid:cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:09:36.639660 systemd[1]: run-netns-cni\x2da4b92769\x2dd0f0\x2d1f6c\x2dbd1c\x2d5e70027312b2.mount: Deactivated successfully. 
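Annotation: two threads converge here. First, the whisker pod is parked in ImagePullBackOff because ghcr.io/flatcar/calico/whisker:v3.30.4 and whisker-backend:v3.30.4 simply do not exist upstream ("trying next host - response was http.StatusNotFound"); no amount of retrying will fix that, so kubelet just backs off. Second, sandbox teardown, which failed for every pod before calico-node started, now completes cleanly; the IPAM warning "Asked to release address but it doesn't exist. Ignoring" is harmless for sandboxes that never received an address. Kubernetes documents the image-pull backoff as exponential with a five-minute cap; a sketch using those documented defaults (the constants are assumptions, not values read from this log):

```python
delay_s, cap_s = 10, 300        # assumed kubelet defaults: 10 s doubling, 5 min cap
for attempt in range(1, 7):
    print(f"pull attempt {attempt}: ErrImagePull -> ImagePullBackOff {delay_s}s")
    delay_s = min(delay_s * 2, cap_s)
```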
Nov 8 00:09:36.825809 systemd-networkd[1380]: calia7887ab0af6: Link UP Nov 8 00:09:36.830702 systemd-networkd[1380]: calia7887ab0af6: Gained carrier Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.719 [INFO][4166] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0 calico-apiserver-5b79fdfd8b- calico-apiserver cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec 917 0 2025-11-08 00:09:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b79fdfd8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 calico-apiserver-5b79fdfd8b-4hxxc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia7887ab0af6 [] [] }} ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.720 [INFO][4166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.754 [INFO][4178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" HandleID="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.754 [INFO][4178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" HandleID="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"calico-apiserver-5b79fdfd8b-4hxxc", "timestamp":"2025-11-08 00:09:36.754252265 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.754 [INFO][4178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.754 [INFO][4178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.754 [INFO][4178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.771 [INFO][4178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.783 [INFO][4178] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.792 [INFO][4178] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.795 [INFO][4178] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.799 [INFO][4178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.799 [INFO][4178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.801 [INFO][4178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346 Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.809 [INFO][4178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.819 [INFO][4178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.66/26] block=192.168.72.64/26 handle="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.819 [INFO][4178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.66/26] handle="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.819 [INFO][4178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:09:36.854666 containerd[1475]: 2025-11-08 00:09:36.819 [INFO][4178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.66/26] IPv6=[] ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" HandleID="k8s-pod-network.1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.856029 containerd[1475]: 2025-11-08 00:09:36.821 [INFO][4166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"calico-apiserver-5b79fdfd8b-4hxxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7887ab0af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:36.856029 containerd[1475]: 2025-11-08 00:09:36.822 [INFO][4166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.66/32] ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.856029 containerd[1475]: 2025-11-08 00:09:36.822 [INFO][4166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7887ab0af6 ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.856029 containerd[1475]: 2025-11-08 00:09:36.830 [INFO][4166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.856029 containerd[1475]: 2025-11-08 00:09:36.831 
[INFO][4166] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346", Pod:"calico-apiserver-5b79fdfd8b-4hxxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7887ab0af6", MAC:"e6:0c:6e:e0:61:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:36.856029 containerd[1475]: 2025-11-08 00:09:36.850 [INFO][4166] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-4hxxc" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:36.889964 containerd[1475]: time="2025-11-08T00:09:36.889712320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:36.889964 containerd[1475]: time="2025-11-08T00:09:36.889786120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:36.889964 containerd[1475]: time="2025-11-08T00:09:36.889802080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:36.890289 containerd[1475]: time="2025-11-08T00:09:36.890110040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:36.925355 systemd[1]: Started cri-containerd-1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346.scope - libcontainer container 1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346. 
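Editor's note: the ADD-path trace above (look up host affinities, try 192.168.72.64/26, load the block, claim 192.168.72.66) is, at its core, a scan for the first free slot in a small block while the host-wide IPAM lock is held. A simplified stdlib-only sketch under that reading; the in-memory `allocated` set stands in for the datastore's block document, and real Calico additionally records handles and attributes per allocation:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a CIDR block in address order and returns the first
// address not present in allocated. ok is false when the block is full.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.72.64/26")
	// .64 is the block's network address and .65 is already claimed by
	// an earlier workload, so the next assignment is .66, as in the log.
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.72.64"): true,
		netip.MustParseAddr("192.168.72.65"): true,
	}
	if ip, ok := nextFree(block, allocated); ok {
		fmt.Println("claimed", ip) // claimed 192.168.72.66
	}
}
```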
Nov 8 00:09:37.114153 containerd[1475]: time="2025-11-08T00:09:37.114093905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-4hxxc,Uid:cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346\"" Nov 8 00:09:37.118532 containerd[1475]: time="2025-11-08T00:09:37.118479065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:37.495740 containerd[1475]: time="2025-11-08T00:09:37.495625985Z" level=info msg="StopPodSandbox for \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\"" Nov 8 00:09:37.509686 containerd[1475]: time="2025-11-08T00:09:37.509642067Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:37.511655 containerd[1475]: time="2025-11-08T00:09:37.511597627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:37.512043 kubelet[2597]: E1108 00:09:37.512003 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:37.513034 kubelet[2597]: E1108 00:09:37.512626 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:37.513388 containerd[1475]: time="2025-11-08T00:09:37.511785907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:37.514198 kubelet[2597]: E1108 00:09:37.513956 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2v9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-4hxxc_calico-apiserver(cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:37.517661 kubelet[2597]: E1108 00:09:37.517616 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.600 [INFO][4250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.600 [INFO][4250] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" iface="eth0" netns="/var/run/netns/cni-9c800c48-6181-b1b7-25af-d6371b2ba067" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.601 [INFO][4250] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" iface="eth0" netns="/var/run/netns/cni-9c800c48-6181-b1b7-25af-d6371b2ba067" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.604 [INFO][4250] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" iface="eth0" netns="/var/run/netns/cni-9c800c48-6181-b1b7-25af-d6371b2ba067" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.604 [INFO][4250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.604 [INFO][4250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.641 [INFO][4257] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.641 [INFO][4257] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.641 [INFO][4257] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.661 [WARNING][4257] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.661 [INFO][4257] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.670 [INFO][4257] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:37.674897 containerd[1475]: 2025-11-08 00:09:37.673 [INFO][4250] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:37.675865 containerd[1475]: time="2025-11-08T00:09:37.675782244Z" level=info msg="TearDown network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\" successfully" Nov 8 00:09:37.675865 containerd[1475]: time="2025-11-08T00:09:37.675819684Z" level=info msg="StopPodSandbox for \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\" returns successfully" Nov 8 00:09:37.677826 systemd[1]: run-netns-cni\x2d9c800c48\x2d6181\x2db1b7\x2d25af\x2dd6371b2ba067.mount: Deactivated successfully. 
Nov 8 00:09:37.689231 containerd[1475]: time="2025-11-08T00:09:37.689146686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c4f66b45d-rgxhv,Uid:d5333c3e-0b85-42ef-9987-96412959a46c,Namespace:calico-system,Attempt:1,}" Nov 8 00:09:37.805208 kubelet[2597]: E1108 00:09:37.804609 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:09:37.890473 systemd-networkd[1380]: cali0f4d69a431e: Link UP Nov 8 00:09:37.891436 systemd-networkd[1380]: cali0f4d69a431e: Gained carrier Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.756 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0 calico-kube-controllers-7c4f66b45d- calico-system d5333c3e-0b85-42ef-9987-96412959a46c 926 0 2025-11-08 00:09:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c4f66b45d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 calico-kube-controllers-7c4f66b45d-rgxhv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0f4d69a431e [] [] }} ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.756 [INFO][4264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.792 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" HandleID="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.793 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" HandleID="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"calico-kube-controllers-7c4f66b45d-rgxhv", 
"timestamp":"2025-11-08 00:09:37.792098696 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.793 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.793 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.793 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.819 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.840 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.850 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.854 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.857 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.857 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.861 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66 Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.867 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.878 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.67/26] block=192.168.72.64/26 handle="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.878 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.67/26] handle="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.879 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:09:37.917785 containerd[1475]: 2025-11-08 00:09:37.879 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.67/26] IPv6=[] ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" HandleID="k8s-pod-network.223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.918790 containerd[1475]: 2025-11-08 00:09:37.884 [INFO][4264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0", GenerateName:"calico-kube-controllers-7c4f66b45d-", Namespace:"calico-system", SelfLink:"", UID:"d5333c3e-0b85-42ef-9987-96412959a46c", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c4f66b45d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"calico-kube-controllers-7c4f66b45d-rgxhv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f4d69a431e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:37.918790 containerd[1475]: 2025-11-08 00:09:37.884 [INFO][4264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.67/32] ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.918790 containerd[1475]: 2025-11-08 00:09:37.884 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f4d69a431e ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.918790 containerd[1475]: 2025-11-08 00:09:37.893 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" 
WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.918790 containerd[1475]: 2025-11-08 00:09:37.893 [INFO][4264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0", GenerateName:"calico-kube-controllers-7c4f66b45d-", Namespace:"calico-system", SelfLink:"", UID:"d5333c3e-0b85-42ef-9987-96412959a46c", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c4f66b45d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66", Pod:"calico-kube-controllers-7c4f66b45d-rgxhv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f4d69a431e", MAC:"ba:36:7a:cf:7f:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:37.918790 containerd[1475]: 2025-11-08 00:09:37.913 [INFO][4264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66" Namespace="calico-system" Pod="calico-kube-controllers-7c4f66b45d-rgxhv" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:37.947619 containerd[1475]: time="2025-11-08T00:09:37.946517673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:37.947619 containerd[1475]: time="2025-11-08T00:09:37.946831033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:37.947619 containerd[1475]: time="2025-11-08T00:09:37.947303073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:37.948642 containerd[1475]: time="2025-11-08T00:09:37.948511873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:37.987434 systemd[1]: Started cri-containerd-223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66.scope - libcontainer container 223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66. Nov 8 00:09:38.080319 containerd[1475]: time="2025-11-08T00:09:38.080181766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c4f66b45d-rgxhv,Uid:d5333c3e-0b85-42ef-9987-96412959a46c,Namespace:calico-system,Attempt:1,} returns sandbox id \"223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66\"" Nov 8 00:09:38.103194 containerd[1475]: time="2025-11-08T00:09:38.102757649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:09:38.173511 systemd-networkd[1380]: calia7887ab0af6: Gained IPv6LL Nov 8 00:09:38.441499 containerd[1475]: time="2025-11-08T00:09:38.441363962Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:38.443878 containerd[1475]: time="2025-11-08T00:09:38.443694642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:09:38.443878 containerd[1475]: time="2025-11-08T00:09:38.443833402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:38.444517 kubelet[2597]: E1108 00:09:38.443986 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:38.444517 kubelet[2597]: E1108 00:09:38.444037 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:38.449186 kubelet[2597]: E1108 00:09:38.449101 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c4f66b45d-rgxhv_calico-system(d5333c3e-0b85-42ef-9987-96412959a46c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:38.450697 kubelet[2597]: E1108 00:09:38.450346 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:09:38.497307 containerd[1475]: time="2025-11-08T00:09:38.496964888Z" level=info msg="StopPodSandbox 
for \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\"" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.558 [INFO][4342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.558 [INFO][4342] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" iface="eth0" netns="/var/run/netns/cni-db6df131-1f87-2968-835b-a75221bc1f58" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.559 [INFO][4342] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" iface="eth0" netns="/var/run/netns/cni-db6df131-1f87-2968-835b-a75221bc1f58" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.559 [INFO][4342] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" iface="eth0" netns="/var/run/netns/cni-db6df131-1f87-2968-835b-a75221bc1f58" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.559 [INFO][4342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.559 [INFO][4342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.603 [INFO][4350] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.603 [INFO][4350] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.604 [INFO][4350] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.620 [WARNING][4350] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.620 [INFO][4350] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.623 [INFO][4350] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:38.630751 containerd[1475]: 2025-11-08 00:09:38.626 [INFO][4342] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:38.631824 containerd[1475]: time="2025-11-08T00:09:38.631201541Z" level=info msg="TearDown network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\" successfully" Nov 8 00:09:38.631824 containerd[1475]: time="2025-11-08T00:09:38.631235101Z" level=info msg="StopPodSandbox for \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\" returns successfully" Nov 8 00:09:38.632439 containerd[1475]: time="2025-11-08T00:09:38.632401901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-6tnl5,Uid:bc4e8de7-6ddd-43cb-ba37-33083ff72076,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:09:38.682622 systemd[1]: run-netns-cni\x2ddb6df131\x2d1f87\x2d2968\x2d835b\x2da75221bc1f58.mount: Deactivated successfully. Nov 8 00:09:38.818846 kubelet[2597]: E1108 00:09:38.817663 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:09:38.820762 kubelet[2597]: E1108 00:09:38.820362 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:09:38.844547 systemd-networkd[1380]: cali04d43bf3b05: Link UP Nov 8 00:09:38.847273 systemd-networkd[1380]: cali04d43bf3b05: Gained carrier Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.723 [INFO][4357] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0 calico-apiserver-5b79fdfd8b- calico-apiserver bc4e8de7-6ddd-43cb-ba37-33083ff72076 941 0 2025-11-08 00:09:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b79fdfd8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 calico-apiserver-5b79fdfd8b-6tnl5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali04d43bf3b05 [] [] }} ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.723 [INFO][4357] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.760 [INFO][4369] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" HandleID="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.761 [INFO][4369] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" HandleID="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"calico-apiserver-5b79fdfd8b-6tnl5", "timestamp":"2025-11-08 00:09:38.760930234 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.761 [INFO][4369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.761 [INFO][4369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.761 [INFO][4369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.777 [INFO][4369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.784 [INFO][4369] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.791 [INFO][4369] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.794 [INFO][4369] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.799 [INFO][4369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.800 [INFO][4369] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.805 [INFO][4369] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04 Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.819 [INFO][4369] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.833 [INFO][4369] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.68/26] block=192.168.72.64/26 handle="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.833 [INFO][4369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.68/26] handle="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.833 [INFO][4369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:09:38.880331 containerd[1475]: 2025-11-08 00:09:38.833 [INFO][4369] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.68/26] IPv6=[] ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" HandleID="k8s-pod-network.fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.880923 containerd[1475]: 2025-11-08 00:09:38.838 [INFO][4357] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc4e8de7-6ddd-43cb-ba37-33083ff72076", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"calico-apiserver-5b79fdfd8b-6tnl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d43bf3b05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:38.880923 containerd[1475]: 2025-11-08 00:09:38.838 [INFO][4357] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.68/32] ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.880923 containerd[1475]: 2025-11-08 00:09:38.838 [INFO][4357] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04d43bf3b05 ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.880923 containerd[1475]: 2025-11-08 00:09:38.849 [INFO][4357] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.880923 containerd[1475]: 2025-11-08 00:09:38.852 
[INFO][4357] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc4e8de7-6ddd-43cb-ba37-33083ff72076", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04", Pod:"calico-apiserver-5b79fdfd8b-6tnl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d43bf3b05", MAC:"e6:6d:5d:d9:8b:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:38.880923 containerd[1475]: 2025-11-08 00:09:38.878 [INFO][4357] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04" Namespace="calico-apiserver" Pod="calico-apiserver-5b79fdfd8b-6tnl5" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:38.915155 containerd[1475]: time="2025-11-08T00:09:38.910721768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:38.915155 containerd[1475]: time="2025-11-08T00:09:38.910786168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:38.915155 containerd[1475]: time="2025-11-08T00:09:38.910801288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:38.915155 containerd[1475]: time="2025-11-08T00:09:38.910883608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:38.954943 systemd[1]: Started cri-containerd-fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04.scope - libcontainer container fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04. 
Nov 8 00:09:39.008280 containerd[1475]: time="2025-11-08T00:09:39.008233338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b79fdfd8b-6tnl5,Uid:bc4e8de7-6ddd-43cb-ba37-33083ff72076,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04\"" Nov 8 00:09:39.012042 containerd[1475]: time="2025-11-08T00:09:39.011991658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:39.362514 containerd[1475]: time="2025-11-08T00:09:39.362462491Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:39.364124 containerd[1475]: time="2025-11-08T00:09:39.364038491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:39.364238 containerd[1475]: time="2025-11-08T00:09:39.364158131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:39.364373 kubelet[2597]: E1108 00:09:39.364283 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:39.364373 kubelet[2597]: E1108 00:09:39.364354 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:39.364538 kubelet[2597]: E1108 00:09:39.364479 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hfm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-6tnl5_calico-apiserver(bc4e8de7-6ddd-43cb-ba37-33083ff72076): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:39.366118 kubelet[2597]: E1108 00:09:39.365788 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:09:39.494854 containerd[1475]: time="2025-11-08T00:09:39.494783463Z" level=info msg="StopPodSandbox for \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\"" Nov 8 00:09:39.495765 containerd[1475]: time="2025-11-08T00:09:39.495715863Z" level=info msg="StopPodSandbox for \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\"" Nov 8 00:09:39.499319 containerd[1475]: time="2025-11-08T00:09:39.499263544Z" level=info msg="StopPodSandbox for \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\"" Nov 8 00:09:39.500265 containerd[1475]: time="2025-11-08T00:09:39.499671384Z" level=info msg="StopPodSandbox for \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\"" Nov 8 00:09:39.774511 systemd-networkd[1380]: cali0f4d69a431e: Gained IPv6LL Nov 8 00:09:39.819839 kubelet[2597]: E1108 00:09:39.819795 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:09:39.820475 kubelet[2597]: E1108 00:09:39.820275 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.690 [INFO][4461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.691 [INFO][4461] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" iface="eth0" netns="/var/run/netns/cni-225d33f5-e357-d543-64f4-e8e8b1e140bb" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.691 [INFO][4461] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" iface="eth0" netns="/var/run/netns/cni-225d33f5-e357-d543-64f4-e8e8b1e140bb" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.699 [INFO][4461] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" iface="eth0" netns="/var/run/netns/cni-225d33f5-e357-d543-64f4-e8e8b1e140bb" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.701 [INFO][4461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.704 [INFO][4461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.796 [INFO][4495] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.796 [INFO][4495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.796 [INFO][4495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.813 [WARNING][4495] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.814 [INFO][4495] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.819 [INFO][4495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
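
The pull of ghcr.io/flatcar/calico/apiserver:v3.30.4 fails with NotFound, kubelet records ErrImagePull, and every later sync of the pod is throttled as ImagePullBackOff until the wait expires, which is exactly the pair of "Error syncing pod, skipping" entries above. Kubelet applies a capped exponential backoff to failed pulls; the commonly cited defaults are a 10s base doubling to a 5m cap, so treat the constants below as assumptions and the function as a sketch of the policy, not kubelet's code:

    package main

    import (
        "fmt"
        "time"
    )

    // Capped exponential backoff in the style kubelet applies to failed image
    // pulls (assumed defaults: 10s base, factor 2, 5m cap; illustrative only).
    func pullBackoff(failures int) time.Duration {
        d := 10 * time.Second
        for i := 1; i < failures; i++ {
            d *= 2
            if d >= 5*time.Minute {
                return 5 * time.Minute
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 7; n++ {
            fmt.Printf("failure %d -> wait %v\n", n, pullBackoff(n))
        }
        // failure 1 -> 10s, 2 -> 20s, ... 5 -> 2m40s, 6 and later -> 5m (capped)
    }
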
Nov 8 00:09:39.823478 containerd[1475]: 2025-11-08 00:09:39.822 [INFO][4461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:39.827223 containerd[1475]: time="2025-11-08T00:09:39.827071431Z" level=info msg="TearDown network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\" successfully" Nov 8 00:09:39.827315 containerd[1475]: time="2025-11-08T00:09:39.827246352Z" level=info msg="StopPodSandbox for \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\" returns successfully" Nov 8 00:09:39.829502 systemd[1]: run-netns-cni\x2d225d33f5\x2de357\x2dd543\x2d64f4\x2de8e8b1e140bb.mount: Deactivated successfully. Nov 8 00:09:39.830945 containerd[1475]: time="2025-11-08T00:09:39.830910880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssphr,Uid:ea25be93-453f-48e1-b6ef-e630315dd03d,Namespace:kube-system,Attempt:1,}" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.682 [INFO][4454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.682 [INFO][4454] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" iface="eth0" netns="/var/run/netns/cni-a87068d4-b576-7aca-b0c0-81935a1271a4" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.682 [INFO][4454] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" iface="eth0" netns="/var/run/netns/cni-a87068d4-b576-7aca-b0c0-81935a1271a4" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.682 [INFO][4454] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" iface="eth0" netns="/var/run/netns/cni-a87068d4-b576-7aca-b0c0-81935a1271a4" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.682 [INFO][4454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.682 [INFO][4454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.795 [INFO][4486] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.797 [INFO][4486] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.819 [INFO][4486] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.848 [WARNING][4486] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.849 [INFO][4486] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.866 [INFO][4486] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:39.878307 containerd[1475]: 2025-11-08 00:09:39.874 [INFO][4454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:39.883258 containerd[1475]: time="2025-11-08T00:09:39.883211675Z" level=info msg="TearDown network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\" successfully" Nov 8 00:09:39.883258 containerd[1475]: time="2025-11-08T00:09:39.883253075Z" level=info msg="StopPodSandbox for \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\" returns successfully" Nov 8 00:09:39.884782 containerd[1475]: time="2025-11-08T00:09:39.884473718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c6ph6,Uid:8c8208d0-d52c-4948-a7c4-1a012578a167,Namespace:calico-system,Attempt:1,}" Nov 8 00:09:39.884916 systemd[1]: run-netns-cni\x2da87068d4\x2db576\x2d7aca\x2db0c0\x2d81935a1271a4.mount: Deactivated successfully. Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.689 [INFO][4466] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.689 [INFO][4466] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" iface="eth0" netns="/var/run/netns/cni-8d2b9251-c169-57da-fbb9-621583076b0c" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.689 [INFO][4466] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" iface="eth0" netns="/var/run/netns/cni-8d2b9251-c169-57da-fbb9-621583076b0c" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.691 [INFO][4466] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" iface="eth0" netns="/var/run/netns/cni-8d2b9251-c169-57da-fbb9-621583076b0c" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.691 [INFO][4466] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.691 [INFO][4466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.801 [INFO][4490] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.801 [INFO][4490] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.869 [INFO][4490] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.920 [WARNING][4490] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.920 [INFO][4490] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.926 [INFO][4490] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:39.933202 containerd[1475]: 2025-11-08 00:09:39.930 [INFO][4466] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:39.937042 containerd[1475]: time="2025-11-08T00:09:39.936982914Z" level=info msg="TearDown network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\" successfully" Nov 8 00:09:39.937042 containerd[1475]: time="2025-11-08T00:09:39.937026354Z" level=info msg="StopPodSandbox for \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\" returns successfully" Nov 8 00:09:39.937786 containerd[1475]: time="2025-11-08T00:09:39.937748916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4z57,Uid:8da5e4ab-3d6f-46a3-91d8-e794f2481a0b,Namespace:calico-system,Attempt:1,}" Nov 8 00:09:39.939700 systemd[1]: run-netns-cni\x2d8d2b9251\x2dc169\x2d57da\x2dfbb9\x2d621583076b0c.mount: Deactivated successfully. Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.691 [INFO][4463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.693 [INFO][4463] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" iface="eth0" netns="/var/run/netns/cni-e57ae5d3-7013-fad2-4b58-ca7f9f4adf21" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.695 [INFO][4463] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" iface="eth0" netns="/var/run/netns/cni-e57ae5d3-7013-fad2-4b58-ca7f9f4adf21" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.697 [INFO][4463] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" iface="eth0" netns="/var/run/netns/cni-e57ae5d3-7013-fad2-4b58-ca7f9f4adf21" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.697 [INFO][4463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.697 [INFO][4463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.805 [INFO][4492] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.807 [INFO][4492] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.926 [INFO][4492] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.943 [WARNING][4492] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.943 [INFO][4492] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.950 [INFO][4492] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:39.975328 containerd[1475]: 2025-11-08 00:09:39.968 [INFO][4463] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:39.979185 containerd[1475]: time="2025-11-08T00:09:39.979037327Z" level=info msg="TearDown network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\" successfully" Nov 8 00:09:39.979185 containerd[1475]: time="2025-11-08T00:09:39.979076527Z" level=info msg="StopPodSandbox for \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\" returns successfully" Nov 8 00:09:39.982774 containerd[1475]: time="2025-11-08T00:09:39.982727535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs8fd,Uid:5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819,Namespace:kube-system,Attempt:1,}" Nov 8 00:09:40.178933 systemd-networkd[1380]: califace0f0681c: Link UP Nov 8 00:09:40.179260 systemd-networkd[1380]: califace0f0681c: Gained carrier Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:39.945 [INFO][4512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0 coredns-668d6bf9bc- kube-system ea25be93-453f-48e1-b6ef-e630315dd03d 961 0 2025-11-08 00:08:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 coredns-668d6bf9bc-ssphr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califace0f0681c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:39.945 [INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.055 [INFO][4538] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" HandleID="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.057 [INFO][4538] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" HandleID="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000339a20), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"coredns-668d6bf9bc-ssphr", "timestamp":"2025-11-08 00:09:40.055282528 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.057 [INFO][4538] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.057 [INFO][4538] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.057 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.082 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.090 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.101 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.104 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.109 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.109 [INFO][4538] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.113 [INFO][4538] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3 Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.121 [INFO][4538] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.157 [INFO][4538] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.69/26] block=192.168.72.64/26 handle="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.157 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.69/26] handle="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.157 [INFO][4538] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
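
The assignment just logged walks the same path each time: confirm the node's affinity for block 192.168.72.64/26, load the block, then claim the lowest free address under the host-wide lock (here 192.168.72.69, after .68 went to the apiserver pod earlier). A toy model of that lowest-free claim over a /26 bitmap (assumption: sequential policy over one 64-address block; Calico's real allocator also tracks handles, reservations, and multiple blocks):

    package main

    import "fmt"

    // Toy model of claiming addresses from the 192.168.72.64/26 block.
    // A /26 holds 64 addresses; used[i] marks 192.168.72.(64+i) as taken.
    type block struct {
        base int // last octet of the block start (64)
        used [64]bool
    }

    func (b *block) claim() (string, bool) {
        for i, taken := range b.used {
            if !taken {
                b.used[i] = true
                return fmt.Sprintf("192.168.72.%d/26", b.base+i), true
            }
        }
        return "", false // block exhausted; real IPAM would try another block
    }

    func main() {
        b := &block{base: 64}
        for i := 0; i < 5; i++ {
            b.used[i] = true // .64-.68 treated as already taken, for illustration
        }
        ip, _ := b.claim()
        fmt.Println("claimed", ip) // claimed 192.168.72.69/26
    }
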
Nov 8 00:09:40.214762 containerd[1475]: 2025-11-08 00:09:40.157 [INFO][4538] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.69/26] IPv6=[] ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" HandleID="k8s-pod-network.54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:40.215430 containerd[1475]: 2025-11-08 00:09:40.166 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ea25be93-453f-48e1-b6ef-e630315dd03d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"coredns-668d6bf9bc-ssphr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace0f0681c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.215430 containerd[1475]: 2025-11-08 00:09:40.166 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.69/32] ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:40.215430 containerd[1475]: 2025-11-08 00:09:40.166 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califace0f0681c ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:40.215430 containerd[1475]: 2025-11-08 00:09:40.177 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:40.215430 containerd[1475]: 2025-11-08 00:09:40.178 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ea25be93-453f-48e1-b6ef-e630315dd03d", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3", Pod:"coredns-668d6bf9bc-ssphr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace0f0681c", MAC:"06:2b:ed:69:3f:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.215430 containerd[1475]: 2025-11-08 00:09:40.206 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3" Namespace="kube-system" Pod="coredns-668d6bf9bc-ssphr" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:40.268148 containerd[1475]: time="2025-11-08T00:09:40.267067445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:40.268148 containerd[1475]: time="2025-11-08T00:09:40.267149966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:40.268148 containerd[1475]: time="2025-11-08T00:09:40.267161767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.268677 containerd[1475]: time="2025-11-08T00:09:40.268428431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.290912 systemd-networkd[1380]: calidf9b0f5909b: Link UP Nov 8 00:09:40.297213 systemd-networkd[1380]: calidf9b0f5909b: Gained carrier Nov 8 00:09:40.331373 systemd[1]: Started cri-containerd-54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3.scope - libcontainer container 54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3. Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.050 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0 goldmane-666569f655- calico-system 8c8208d0-d52c-4948-a7c4-1a012578a167 960 0 2025-11-08 00:09:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 goldmane-666569f655-c6ph6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidf9b0f5909b [] [] }} ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.051 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.170 [INFO][4567] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" HandleID="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.170 [INFO][4567] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" HandleID="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000371c10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"goldmane-666569f655-c6ph6", "timestamp":"2025-11-08 00:09:40.1702664 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.170 [INFO][4567] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.170 [INFO][4567] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
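
Every ipam_plugin.go operation in this stretch brackets its work with "About to acquire", "Acquired", and "Released host-wide IPAM lock", so the concurrent CNI invocations ([4538], [4567], [4583], [4588]) serialize their block updates, and a release for an address that was never recorded is logged and ignored rather than treated as an error (the earlier WARNING lines). A sketch showing both properties with hypothetical types (not the plugin's code):

    package main

    import (
        "fmt"
        "sync"
    )

    // Hypothetical allocator: one host-wide mutex serializes assign/release,
    // and releasing an unknown handle is an ignored no-op, mirroring the
    // "Asked to release address but it doesn't exist. Ignoring" warnings.
    type ipam struct {
        mu    sync.Mutex
        byKey map[string]string // handleID -> IP
    }

    func (a *ipam) release(handle string) {
        a.mu.Lock() // "About to acquire host-wide IPAM lock."
        defer a.mu.Unlock()
        if _, ok := a.byKey[handle]; !ok {
            fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
            return
        }
        delete(a.byKey, handle)
    }

    func main() {
        a := &ipam{byKey: map[string]string{}}
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func(n int) { defer wg.Done(); a.release(fmt.Sprintf("k8s-pod-network.%d", n)) }(i)
        }
        wg.Wait()
    }
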
Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.170 [INFO][4567] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.200 [INFO][4567] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.216 [INFO][4567] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.231 [INFO][4567] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.235 [INFO][4567] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.241 [INFO][4567] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.242 [INFO][4567] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.246 [INFO][4567] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.256 [INFO][4567] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.273 [INFO][4567] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.70/26] block=192.168.72.64/26 handle="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.273 [INFO][4567] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.70/26] handle="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.273 [INFO][4567] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
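
The host-side names that keep appearing (cali04d43bf3b05, califace0f0681c, calidf9b0f5909b, ...) are not random: Calico derives a stable interface name by hashing the workload endpoint's identity and truncating to Linux's 15-character interface-name limit. A plausible reconstruction of that shape; the exact hash input is an assumption here, and only the prefix-plus-truncated-hash pattern is taken from the log:

    package main

    import (
        "crypto/sha1"
        "encoding/hex"
        "fmt"
    )

    // vethName sketches Calico-style stable interface naming: "cali" plus the
    // leading hex of a SHA-1 over some endpoint identity, truncated so the
    // whole name fits IFNAMSIZ (15 usable characters on Linux).
    func vethName(endpointID string) string {
        sum := sha1.Sum([]byte(endpointID))
        return ("cali" + hex.EncodeToString(sum[:]))[:15]
    }

    func main() {
        // Hypothetical identity string; the real input is Calico's choice.
        fmt.Println(vethName("kube-system/coredns-668d6bf9bc-ssphr"))
    }
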
Nov 8 00:09:40.334384 containerd[1475]: 2025-11-08 00:09:40.273 [INFO][4567] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.70/26] IPv6=[] ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" HandleID="k8s-pod-network.afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:40.335639 containerd[1475]: 2025-11-08 00:09:40.282 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8c8208d0-d52c-4948-a7c4-1a012578a167", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"goldmane-666569f655-c6ph6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf9b0f5909b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.335639 containerd[1475]: 2025-11-08 00:09:40.282 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.70/32] ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:40.335639 containerd[1475]: 2025-11-08 00:09:40.282 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf9b0f5909b ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:40.335639 containerd[1475]: 2025-11-08 00:09:40.305 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:40.335639 containerd[1475]: 2025-11-08 00:09:40.305 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" 
Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8c8208d0-d52c-4948-a7c4-1a012578a167", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c", Pod:"goldmane-666569f655-c6ph6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf9b0f5909b", MAC:"e6:61:7e:bf:e4:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.335639 containerd[1475]: 2025-11-08 00:09:40.328 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c" Namespace="calico-system" Pod="goldmane-666569f655-c6ph6" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:40.386340 containerd[1475]: time="2025-11-08T00:09:40.386048313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:40.386576 containerd[1475]: time="2025-11-08T00:09:40.386536002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:40.386862 containerd[1475]: time="2025-11-08T00:09:40.386722806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.387804 containerd[1475]: time="2025-11-08T00:09:40.387401298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.409767 containerd[1475]: time="2025-11-08T00:09:40.409525480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ssphr,Uid:ea25be93-453f-48e1-b6ef-e630315dd03d,Namespace:kube-system,Attempt:1,} returns sandbox id \"54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3\"" Nov 8 00:09:40.424229 containerd[1475]: time="2025-11-08T00:09:40.423974116Z" level=info msg="CreateContainer within sandbox \"54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:09:40.441756 systemd[1]: Started cri-containerd-afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c.scope - libcontainer container afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c. Nov 8 00:09:40.463362 systemd-networkd[1380]: cali81e80ad88a4: Link UP Nov 8 00:09:40.463614 systemd-networkd[1380]: cali81e80ad88a4: Gained carrier Nov 8 00:09:40.510541 containerd[1475]: time="2025-11-08T00:09:40.510468324Z" level=info msg="CreateContainer within sandbox \"54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8d25b8c5d3c71d46e78bdb821c1dad667d9680894dc640b589c1071876e1f5de\"" Nov 8 00:09:40.512045 containerd[1475]: time="2025-11-08T00:09:40.511731468Z" level=info msg="StartContainer for \"8d25b8c5d3c71d46e78bdb821c1dad667d9680894dc640b589c1071876e1f5de\"" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.160 [INFO][4544] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0 coredns-668d6bf9bc- kube-system 5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819 962 0 2025-11-08 00:08:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 coredns-668d6bf9bc-xs8fd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81e80ad88a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.160 [INFO][4544] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.263 [INFO][4583] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" HandleID="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.267 [INFO][4583] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" HandleID="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" 
Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa760), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"coredns-668d6bf9bc-xs8fd", "timestamp":"2025-11-08 00:09:40.263631739 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.267 [INFO][4583] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.274 [INFO][4583] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.274 [INFO][4583] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.312 [INFO][4583] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.341 [INFO][4583] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.355 [INFO][4583] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.362 [INFO][4583] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.373 [INFO][4583] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.373 [INFO][4583] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.378 [INFO][4583] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.393 [INFO][4583] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.425 [INFO][4583] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.71/26] block=192.168.72.64/26 handle="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.425 [INFO][4583] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.71/26] handle="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.426 [INFO][4583] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:09:40.524919 containerd[1475]: 2025-11-08 00:09:40.426 [INFO][4583] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.71/26] IPv6=[] ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" HandleID="k8s-pod-network.3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:40.525512 containerd[1475]: 2025-11-08 00:09:40.440 [INFO][4544] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"coredns-668d6bf9bc-xs8fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81e80ad88a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.525512 containerd[1475]: 2025-11-08 00:09:40.441 [INFO][4544] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.71/32] ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:40.525512 containerd[1475]: 2025-11-08 00:09:40.443 [INFO][4544] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81e80ad88a4 ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:40.525512 containerd[1475]: 2025-11-08 00:09:40.485 [INFO][4544] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:40.525512 containerd[1475]: 2025-11-08 00:09:40.487 [INFO][4544] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce", Pod:"coredns-668d6bf9bc-xs8fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81e80ad88a4", MAC:"92:4b:4c:ec:0e:a9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.525512 containerd[1475]: 2025-11-08 00:09:40.518 [INFO][4544] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-xs8fd" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:40.541401 systemd-networkd[1380]: cali04d43bf3b05: Gained IPv6LL Nov 8 00:09:40.562732 systemd[1]: Started cri-containerd-8d25b8c5d3c71d46e78bdb821c1dad667d9680894dc640b589c1071876e1f5de.scope - libcontainer container 8d25b8c5d3c71d46e78bdb821c1dad667d9680894dc640b589c1071876e1f5de. Nov 8 00:09:40.601248 containerd[1475]: time="2025-11-08T00:09:40.601086211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:40.601248 containerd[1475]: time="2025-11-08T00:09:40.601167653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:40.601248 containerd[1475]: time="2025-11-08T00:09:40.601183293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.601467 containerd[1475]: time="2025-11-08T00:09:40.601272815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.624651 systemd-networkd[1380]: cali7213c6013d8: Link UP Nov 8 00:09:40.628479 systemd-networkd[1380]: cali7213c6013d8: Gained carrier Nov 8 00:09:40.657312 systemd[1]: Started cri-containerd-3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce.scope - libcontainer container 3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce. Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.164 [INFO][4556] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0 csi-node-driver- calico-system 8da5e4ab-3d6f-46a3-91d8-e794f2481a0b 963 0 2025-11-08 00:09:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-6-n-fb20dfd731 csi-node-driver-l4z57 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7213c6013d8 [] [] }} ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.165 [INFO][4556] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.287 [INFO][4588] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" HandleID="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.292 [INFO][4588] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" HandleID="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315970), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-6-n-fb20dfd731", "pod":"csi-node-driver-l4z57", "timestamp":"2025-11-08 00:09:40.287124107 +0000 UTC"}, Hostname:"ci-4081-3-6-n-fb20dfd731", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.292 [INFO][4588] ipam/ipam_plugin.go 377: 
About to acquire host-wide IPAM lock. Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.432 [INFO][4588] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.432 [INFO][4588] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-6-n-fb20dfd731' Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.481 [INFO][4588] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.495 [INFO][4588] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.531 [INFO][4588] ipam/ipam.go 511: Trying affinity for 192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.552 [INFO][4588] ipam/ipam.go 158: Attempting to load block cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.566 [INFO][4588] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.72.64/26 host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.566 [INFO][4588] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.72.64/26 handle="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.570 [INFO][4588] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.587 [INFO][4588] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.72.64/26 handle="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.602 [INFO][4588] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.72.72/26] block=192.168.72.64/26 handle="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.603 [INFO][4588] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.72.72/26] handle="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" host="ci-4081-3-6-n-fb20dfd731" Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.603 [INFO][4588] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:09:40.678604 containerd[1475]: 2025-11-08 00:09:40.603 [INFO][4588] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.72.72/26] IPv6=[] ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" HandleID="k8s-pod-network.17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:40.679157 containerd[1475]: 2025-11-08 00:09:40.612 [INFO][4556] cni-plugin/k8s.go 418: Populated endpoint ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"", Pod:"csi-node-driver-l4z57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7213c6013d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.679157 containerd[1475]: 2025-11-08 00:09:40.612 [INFO][4556] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.72.72/32] ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:40.679157 containerd[1475]: 2025-11-08 00:09:40.612 [INFO][4556] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7213c6013d8 ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:40.679157 containerd[1475]: 2025-11-08 00:09:40.626 [INFO][4556] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:40.679157 containerd[1475]: 2025-11-08 00:09:40.628 [INFO][4556] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b", Pod:"csi-node-driver-l4z57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7213c6013d8", MAC:"86:84:d4:cb:7c:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:40.679157 containerd[1475]: 2025-11-08 00:09:40.671 [INFO][4556] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b" Namespace="calico-system" Pod="csi-node-driver-l4z57" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:40.684461 containerd[1475]: time="2025-11-08T00:09:40.683527143Z" level=info msg="StartContainer for \"8d25b8c5d3c71d46e78bdb821c1dad667d9680894dc640b589c1071876e1f5de\" returns successfully" Nov 8 00:09:40.714219 containerd[1475]: time="2025-11-08T00:09:40.712822421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:09:40.714219 containerd[1475]: time="2025-11-08T00:09:40.713885961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:09:40.714219 containerd[1475]: time="2025-11-08T00:09:40.713899682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.714219 containerd[1475]: time="2025-11-08T00:09:40.713980403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:09:40.739609 systemd[1]: Started cri-containerd-17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b.scope - libcontainer container 17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b. 
Nov 8 00:09:40.757035 containerd[1475]: time="2025-11-08T00:09:40.756996303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xs8fd,Uid:5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819,Namespace:kube-system,Attempt:1,} returns sandbox id \"3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce\"" Nov 8 00:09:40.763072 containerd[1475]: time="2025-11-08T00:09:40.763033618Z" level=info msg="CreateContainer within sandbox \"3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:09:40.788211 containerd[1475]: time="2025-11-08T00:09:40.788036655Z" level=info msg="CreateContainer within sandbox \"3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bdd35c33b2623518e162181d7d11123e1046844cd0502a54c3cd56289904ea3d\"" Nov 8 00:09:40.794201 containerd[1475]: time="2025-11-08T00:09:40.794060889Z" level=info msg="StartContainer for \"bdd35c33b2623518e162181d7d11123e1046844cd0502a54c3cd56289904ea3d\"" Nov 8 00:09:40.839744 systemd[1]: run-netns-cni\x2de57ae5d3\x2d7013\x2dfad2\x2d4b58\x2dca7f9f4adf21.mount: Deactivated successfully. Nov 8 00:09:40.857202 kubelet[2597]: E1108 00:09:40.856805 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:09:40.870849 systemd[1]: Started cri-containerd-bdd35c33b2623518e162181d7d11123e1046844cd0502a54c3cd56289904ea3d.scope - libcontainer container bdd35c33b2623518e162181d7d11123e1046844cd0502a54c3cd56289904ea3d. 
Nov 8 00:09:40.885707 containerd[1475]: time="2025-11-08T00:09:40.885648075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-c6ph6,Uid:8c8208d0-d52c-4948-a7c4-1a012578a167,Namespace:calico-system,Attempt:1,} returns sandbox id \"afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c\"" Nov 8 00:09:40.892620 containerd[1475]: time="2025-11-08T00:09:40.892578287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:09:40.926846 kubelet[2597]: I1108 00:09:40.926777 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ssphr" podStartSLOduration=47.926755739 podStartE2EDuration="47.926755739s" podCreationTimestamp="2025-11-08 00:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:09:40.905386531 +0000 UTC m=+53.544465027" watchObservedRunningTime="2025-11-08 00:09:40.926755739 +0000 UTC m=+53.565834235" Nov 8 00:09:40.948856 containerd[1475]: time="2025-11-08T00:09:40.948805759Z" level=info msg="StartContainer for \"bdd35c33b2623518e162181d7d11123e1046844cd0502a54c3cd56289904ea3d\" returns successfully" Nov 8 00:09:41.014151 containerd[1475]: time="2025-11-08T00:09:41.014072756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l4z57,Uid:8da5e4ab-3d6f-46a3-91d8-e794f2481a0b,Namespace:calico-system,Attempt:1,} returns sandbox id \"17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b\"" Nov 8 00:09:41.246398 systemd-networkd[1380]: califace0f0681c: Gained IPv6LL Nov 8 00:09:41.255492 containerd[1475]: time="2025-11-08T00:09:41.255429348Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:41.256805 containerd[1475]: time="2025-11-08T00:09:41.256729933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:09:41.256876 containerd[1475]: time="2025-11-08T00:09:41.256843975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:41.257521 kubelet[2597]: E1108 00:09:41.257068 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:41.257521 kubelet[2597]: E1108 00:09:41.257137 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:41.257647 containerd[1475]: time="2025-11-08T00:09:41.257499307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:09:41.257717 kubelet[2597]: E1108 00:09:41.257439 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c6ph6_calico-system(8c8208d0-d52c-4948-a7c4-1a012578a167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:41.259565 kubelet[2597]: E1108 00:09:41.259400 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:09:41.620529 containerd[1475]: 
time="2025-11-08T00:09:41.620264869Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:41.622145 containerd[1475]: time="2025-11-08T00:09:41.621898379Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:09:41.622145 containerd[1475]: time="2025-11-08T00:09:41.621943580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:09:41.622780 kubelet[2597]: E1108 00:09:41.622527 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:41.622780 kubelet[2597]: E1108 00:09:41.622603 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:41.624010 kubelet[2597]: E1108 00:09:41.623938 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:41.626477 containerd[1475]: time="2025-11-08T00:09:41.626314741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:09:41.875596 kubelet[2597]: E1108 00:09:41.875477 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:09:41.912915 kubelet[2597]: I1108 00:09:41.912618 2597 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xs8fd" podStartSLOduration=48.912598286 podStartE2EDuration="48.912598286s" podCreationTimestamp="2025-11-08 00:08:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:09:41.912521245 +0000 UTC m=+54.551599741" watchObservedRunningTime="2025-11-08 00:09:41.912598286 +0000 UTC m=+54.551676782" Nov 8 00:09:41.949414 systemd-networkd[1380]: cali81e80ad88a4: Gained IPv6LL Nov 8 00:09:41.981588 containerd[1475]: time="2025-11-08T00:09:41.981526003Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:41.982998 containerd[1475]: time="2025-11-08T00:09:41.982948510Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:09:41.983088 containerd[1475]: time="2025-11-08T00:09:41.983052552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:09:41.983450 kubelet[2597]: E1108 00:09:41.983204 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:41.983450 kubelet[2597]: E1108 00:09:41.983258 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:41.983450 kubelet[2597]: E1108 00:09:41.983395 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:41.984774 kubelet[2597]: E1108 00:09:41.984719 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:42.077422 systemd-networkd[1380]: cali7213c6013d8: Gained IPv6LL Nov 8 00:09:42.141364 systemd-networkd[1380]: calidf9b0f5909b: Gained IPv6LL Nov 8 00:09:42.881896 kubelet[2597]: E1108 00:09:42.881835 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:09:42.882941 kubelet[2597]: E1108 00:09:42.881496 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:09:47.497112 containerd[1475]: time="2025-11-08T00:09:47.496683172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:09:47.510780 containerd[1475]: time="2025-11-08T00:09:47.510738832Z" level=info msg="StopPodSandbox for \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\"" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.581 [WARNING][4904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ea25be93-453f-48e1-b6ef-e630315dd03d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3", Pod:"coredns-668d6bf9bc-ssphr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace0f0681c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.581 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.581 [INFO][4904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" iface="eth0" netns="" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.581 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.581 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.619 [INFO][4912] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.620 [INFO][4912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.620 [INFO][4912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.635 [WARNING][4912] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.635 [INFO][4912] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.644 [INFO][4912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:47.652261 containerd[1475]: 2025-11-08 00:09:47.648 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.652261 containerd[1475]: time="2025-11-08T00:09:47.651259516Z" level=info msg="TearDown network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\" successfully" Nov 8 00:09:47.652261 containerd[1475]: time="2025-11-08T00:09:47.651290797Z" level=info msg="StopPodSandbox for \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\" returns successfully" Nov 8 00:09:47.655172 containerd[1475]: time="2025-11-08T00:09:47.653729595Z" level=info msg="RemovePodSandbox for \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\"" Nov 8 00:09:47.657381 containerd[1475]: time="2025-11-08T00:09:47.657332771Z" level=info msg="Forcibly stopping sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\"" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.718 [WARNING][4926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ea25be93-453f-48e1-b6ef-e630315dd03d", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"54016f06e18fe20dd571506374dd78cf6abf50bc5fa9d66b79cf08348d9e28f3", Pod:"coredns-668d6bf9bc-ssphr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califace0f0681c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.718 [INFO][4926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.718 [INFO][4926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" iface="eth0" netns="" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.719 [INFO][4926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.719 [INFO][4926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.775 [INFO][4933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.775 [INFO][4933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.775 [INFO][4933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.789 [WARNING][4933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.789 [INFO][4933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" HandleID="k8s-pod-network.c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--ssphr-eth0" Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.791 [INFO][4933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:47.799887 containerd[1475]: 2025-11-08 00:09:47.796 [INFO][4926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f" Nov 8 00:09:47.800491 containerd[1475]: time="2025-11-08T00:09:47.799117795Z" level=info msg="TearDown network for sandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\" successfully" Nov 8 00:09:47.808411 containerd[1475]: time="2025-11-08T00:09:47.808014615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:09:47.808411 containerd[1475]: time="2025-11-08T00:09:47.808160097Z" level=info msg="RemovePodSandbox \"c25381bf5676158626aa4d4c77468ddcc8381eae410165e7e94d675c0963d28f\" returns successfully" Nov 8 00:09:47.809710 containerd[1475]: time="2025-11-08T00:09:47.809397556Z" level=info msg="StopPodSandbox for \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\"" Nov 8 00:09:47.849602 containerd[1475]: time="2025-11-08T00:09:47.849546586Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:47.853165 containerd[1475]: time="2025-11-08T00:09:47.852290069Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:09:47.853415 containerd[1475]: time="2025-11-08T00:09:47.852328790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:09:47.854183 kubelet[2597]: E1108 00:09:47.853548 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:47.854183 kubelet[2597]: E1108 00:09:47.853601 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:09:47.854183 kubelet[2597]: E1108 00:09:47.853704 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fef3b1f193134241bf57d7645ef8585e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:47.859181 containerd[1475]: time="2025-11-08T00:09:47.858901493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.870 [WARNING][4949] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc4e8de7-6ddd-43cb-ba37-33083ff72076", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04", Pod:"calico-apiserver-5b79fdfd8b-6tnl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d43bf3b05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.870 [INFO][4949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.870 [INFO][4949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" iface="eth0" netns="" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.870 [INFO][4949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.870 [INFO][4949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.907 [INFO][4956] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.907 [INFO][4956] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.907 [INFO][4956] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.925 [WARNING][4956] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.925 [INFO][4956] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.928 [INFO][4956] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:47.935639 containerd[1475]: 2025-11-08 00:09:47.932 [INFO][4949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:47.937224 containerd[1475]: time="2025-11-08T00:09:47.935837979Z" level=info msg="TearDown network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\" successfully" Nov 8 00:09:47.937224 containerd[1475]: time="2025-11-08T00:09:47.935872020Z" level=info msg="StopPodSandbox for \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\" returns successfully" Nov 8 00:09:47.937224 containerd[1475]: time="2025-11-08T00:09:47.936984517Z" level=info msg="RemovePodSandbox for \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\"" Nov 8 00:09:47.937224 containerd[1475]: time="2025-11-08T00:09:47.937016718Z" level=info msg="Forcibly stopping sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\"" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.001 [WARNING][4970] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc4e8de7-6ddd-43cb-ba37-33083ff72076", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"fd9018df17ba526a6d81a83cf7a812190595d866f4253bf022bf2b91c65b8f04", Pod:"calico-apiserver-5b79fdfd8b-6tnl5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali04d43bf3b05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.002 [INFO][4970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.002 [INFO][4970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" iface="eth0" netns="" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.002 [INFO][4970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.002 [INFO][4970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.028 [INFO][4977] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.028 [INFO][4977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.028 [INFO][4977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.056 [WARNING][4977] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.056 [INFO][4977] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" HandleID="k8s-pod-network.f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--6tnl5-eth0" Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.059 [INFO][4977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:48.066059 containerd[1475]: 2025-11-08 00:09:48.062 [INFO][4970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7" Nov 8 00:09:48.066059 containerd[1475]: time="2025-11-08T00:09:48.065056339Z" level=info msg="TearDown network for sandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\" successfully" Nov 8 00:09:48.069622 containerd[1475]: time="2025-11-08T00:09:48.069578088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:09:48.069768 containerd[1475]: time="2025-11-08T00:09:48.069642409Z" level=info msg="RemovePodSandbox \"f0166007c694934db0ee4ce048a6ae1d5c05589ff56f9aa377c64b3eeeec35e7\" returns successfully" Nov 8 00:09:48.070249 containerd[1475]: time="2025-11-08T00:09:48.070227298Z" level=info msg="StopPodSandbox for \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\"" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.134 [WARNING][4991] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.134 [INFO][4991] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.134 [INFO][4991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" iface="eth0" netns="" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.134 [INFO][4991] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.134 [INFO][4991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.168 [INFO][4998] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.168 [INFO][4998] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.168 [INFO][4998] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.184 [WARNING][4998] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.184 [INFO][4998] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.187 [INFO][4998] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:48.191360 containerd[1475]: 2025-11-08 00:09:48.189 [INFO][4991] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.192477 containerd[1475]: time="2025-11-08T00:09:48.191495668Z" level=info msg="TearDown network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\" successfully" Nov 8 00:09:48.192477 containerd[1475]: time="2025-11-08T00:09:48.191530109Z" level=info msg="StopPodSandbox for \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\" returns successfully" Nov 8 00:09:48.192477 containerd[1475]: time="2025-11-08T00:09:48.192349002Z" level=info msg="RemovePodSandbox for \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\"" Nov 8 00:09:48.192477 containerd[1475]: time="2025-11-08T00:09:48.192379882Z" level=info msg="Forcibly stopping sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\"" Nov 8 00:09:48.209479 containerd[1475]: time="2025-11-08T00:09:48.209263660Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:48.212552 containerd[1475]: time="2025-11-08T00:09:48.212353187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:09:48.213270 kubelet[2597]: E1108 00:09:48.213074 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:48.213270 kubelet[2597]: E1108 00:09:48.213154 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:09:48.213427 kubelet[2597]: E1108 00:09:48.213270 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:48.214269 containerd[1475]: time="2025-11-08T00:09:48.213724488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:48.216018 kubelet[2597]: E1108 00:09:48.215956 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.254 [WARNING][5012] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" 
WorkloadEndpoint="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.255 [INFO][5012] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.255 [INFO][5012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" iface="eth0" netns="" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.255 [INFO][5012] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.255 [INFO][5012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.280 [INFO][5019] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.280 [INFO][5019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.280 [INFO][5019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.294 [WARNING][5019] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.294 [INFO][5019] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" HandleID="k8s-pod-network.cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Workload="ci--4081--3--6--n--fb20dfd731-k8s-whisker--7549cdfd84--tb95c-eth0" Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.296 [INFO][5019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:48.303239 containerd[1475]: 2025-11-08 00:09:48.299 [INFO][5012] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297" Nov 8 00:09:48.303865 containerd[1475]: time="2025-11-08T00:09:48.303280974Z" level=info msg="TearDown network for sandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\" successfully" Nov 8 00:09:48.311064 containerd[1475]: time="2025-11-08T00:09:48.310872570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:09:48.311064 containerd[1475]: time="2025-11-08T00:09:48.310948971Z" level=info msg="RemovePodSandbox \"cdc20c8dfaab42ad928f93b1426d353100cf9b66fc4300cde0b77fbd48cb0297\" returns successfully"
Nov 8 00:09:48.312176 containerd[1475]: time="2025-11-08T00:09:48.311937186Z" level=info msg="StopPodSandbox for \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\""
Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.359 [WARNING][5033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b", Pod:"csi-node-driver-l4z57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7213c6013d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.360 [INFO][5033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766"
Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.360 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" iface="eth0" netns="" Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.360 [INFO][5033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.360 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.390 [INFO][5041] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.391 [INFO][5041] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.392 [INFO][5041] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.413 [WARNING][5041] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.413 [INFO][5041] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.415 [INFO][5041] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:48.419286 containerd[1475]: 2025-11-08 00:09:48.417 [INFO][5033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:48.420081 containerd[1475]: time="2025-11-08T00:09:48.419386026Z" level=info msg="TearDown network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\" successfully" Nov 8 00:09:48.420081 containerd[1475]: time="2025-11-08T00:09:48.419958195Z" level=info msg="StopPodSandbox for \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\" returns successfully" Nov 8 00:09:48.420598 containerd[1475]: time="2025-11-08T00:09:48.420520283Z" level=info msg="RemovePodSandbox for \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\"" Nov 8 00:09:48.420872 containerd[1475]: time="2025-11-08T00:09:48.420711966Z" level=info msg="Forcibly stopping sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\"" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.465 [WARNING][5056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8da5e4ab-3d6f-46a3-91d8-e794f2481a0b", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"17a4e8d633c5ea614852478316716090d544d2d7cefe3a11d0356e646a30896b", Pod:"csi-node-driver-l4z57", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.72.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7213c6013d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.465 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.465 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" iface="eth0" netns="" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.465 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.465 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.490 [INFO][5064] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.490 [INFO][5064] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.490 [INFO][5064] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.501 [WARNING][5064] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.501 [INFO][5064] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" HandleID="k8s-pod-network.a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Workload="ci--4081--3--6--n--fb20dfd731-k8s-csi--node--driver--l4z57-eth0" Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.503 [INFO][5064] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:48.507508 containerd[1475]: 2025-11-08 00:09:48.505 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766" Nov 8 00:09:48.509793 containerd[1475]: time="2025-11-08T00:09:48.509152956Z" level=info msg="TearDown network for sandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\" successfully" Nov 8 00:09:48.514485 containerd[1475]: time="2025-11-08T00:09:48.513737746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:09:48.514485 containerd[1475]: time="2025-11-08T00:09:48.513814667Z" level=info msg="RemovePodSandbox \"a3e04ee456d8c1cdb1665bc126985c4e5abaf9fd9a96eb6b5ba469e850264766\" returns successfully" Nov 8 00:09:48.515188 containerd[1475]: time="2025-11-08T00:09:48.514853283Z" level=info msg="StopPodSandbox for \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\"" Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.584 [WARNING][5079] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce", Pod:"coredns-668d6bf9bc-xs8fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81e80ad88a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.585 [INFO][5079] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.585 [INFO][5079] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" iface="eth0" netns="" Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.585 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.585 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.625 [INFO][5086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.627 [INFO][5086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.627 [INFO][5086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.653 [WARNING][5086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0"
Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.653 [INFO][5086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0"
Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.656 [INFO][5086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:09:48.660621 containerd[1475]: 2025-11-08 00:09:48.659 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28"
Nov 8 00:09:48.662840 containerd[1475]: time="2025-11-08T00:09:48.662251452Z" level=info msg="TearDown network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\" successfully"
Nov 8 00:09:48.662840 containerd[1475]: time="2025-11-08T00:09:48.662287533Z" level=info msg="StopPodSandbox for \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\" returns successfully"
Nov 8 00:09:48.663145 containerd[1475]: time="2025-11-08T00:09:48.663038384Z" level=info msg="RemovePodSandbox for \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\""
Nov 8 00:09:48.663145 containerd[1475]: time="2025-11-08T00:09:48.663074025Z" level=info msg="Forcibly stopping sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\""
Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.729 [WARNING][5100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5c9f9ae4-8fa2-4cf9-9f3d-b855fdc3f819", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 8, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"3b5284833526d0250936c0b4ddd4546c02fb88e5e3915ed8eb448d603294f6ce", Pod:"coredns-668d6bf9bc-xs8fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81e80ad88a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.729 [INFO][5100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.729 [INFO][5100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" iface="eth0" netns="" Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.729 [INFO][5100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.729 [INFO][5100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.770 [INFO][5107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0" Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.770 [INFO][5107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.770 [INFO][5107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.796 [WARNING][5107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0"
Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.797 [INFO][5107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" HandleID="k8s-pod-network.468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28" Workload="ci--4081--3--6--n--fb20dfd731-k8s-coredns--668d6bf9bc--xs8fd-eth0"
Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.801 [INFO][5107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:09:48.806995 containerd[1475]: 2025-11-08 00:09:48.803 [INFO][5100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28"
Nov 8 00:09:48.806995 containerd[1475]: time="2025-11-08T00:09:48.805491798Z" level=info msg="TearDown network for sandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\" successfully"
Nov 8 00:09:48.818109 containerd[1475]: time="2025-11-08T00:09:48.818045070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 8 00:09:48.818338 containerd[1475]: time="2025-11-08T00:09:48.818116631Z" level=info msg="RemovePodSandbox \"468ded43a90e079e7ee1ef11c8192b6004d1c90449ac48d0362295b91c2a2b28\" returns successfully"
Nov 8 00:09:48.820293 containerd[1475]: time="2025-11-08T00:09:48.819908098Z" level=info msg="StopPodSandbox for \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\""
Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.865 [WARNING][5121] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.
ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0", GenerateName:"calico-kube-controllers-7c4f66b45d-", Namespace:"calico-system", SelfLink:"", UID:"d5333c3e-0b85-42ef-9987-96412959a46c", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c4f66b45d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66", Pod:"calico-kube-controllers-7c4f66b45d-rgxhv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f4d69a431e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.866 [INFO][5121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.866 [INFO][5121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" iface="eth0" netns="" Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.866 [INFO][5121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.866 [INFO][5121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.898 [INFO][5128] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.899 [INFO][5128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.899 [INFO][5128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.912 [WARNING][5128] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.912 [INFO][5128] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.916 [INFO][5128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:48.920872 containerd[1475]: 2025-11-08 00:09:48.919 [INFO][5121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:48.920872 containerd[1475]: time="2025-11-08T00:09:48.920743917Z" level=info msg="TearDown network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\" successfully" Nov 8 00:09:48.920872 containerd[1475]: time="2025-11-08T00:09:48.920769317Z" level=info msg="StopPodSandbox for \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\" returns successfully" Nov 8 00:09:48.922602 containerd[1475]: time="2025-11-08T00:09:48.922562345Z" level=info msg="RemovePodSandbox for \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\"" Nov 8 00:09:48.922602 containerd[1475]: time="2025-11-08T00:09:48.922600145Z" level=info msg="Forcibly stopping sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\"" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:48.967 [WARNING][5143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0", GenerateName:"calico-kube-controllers-7c4f66b45d-", Namespace:"calico-system", SelfLink:"", UID:"d5333c3e-0b85-42ef-9987-96412959a46c", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c4f66b45d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"223cfccba2e0e68bd760053258f9a1490a3e74c4a9f40ae06231396b68836d66", Pod:"calico-kube-controllers-7c4f66b45d-rgxhv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0f4d69a431e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:48.968 [INFO][5143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:48.968 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" iface="eth0" netns="" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:48.968 [INFO][5143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:48.968 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:49.003 [INFO][5150] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:49.003 [INFO][5150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:49.003 [INFO][5150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:49.012 [WARNING][5150] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:49.012 [INFO][5150] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" HandleID="k8s-pod-network.eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--kube--controllers--7c4f66b45d--rgxhv-eth0" Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:49.015 [INFO][5150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:49.020239 containerd[1475]: 2025-11-08 00:09:49.017 [INFO][5143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5" Nov 8 00:09:49.020723 containerd[1475]: time="2025-11-08T00:09:49.020285188Z" level=info msg="TearDown network for sandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\" successfully" Nov 8 00:09:49.025608 containerd[1475]: time="2025-11-08T00:09:49.025543266Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:09:49.025741 containerd[1475]: time="2025-11-08T00:09:49.025620787Z" level=info msg="RemovePodSandbox \"eda08aa50e13022494e252b521916ec4ee7e4b1b8c7ba1ed6e25a56a386e7ec5\" returns successfully" Nov 8 00:09:49.026432 containerd[1475]: time="2025-11-08T00:09:49.026357598Z" level=info msg="StopPodSandbox for \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\"" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.076 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346", Pod:"calico-apiserver-5b79fdfd8b-4hxxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7887ab0af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.077 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.077 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" iface="eth0" netns="" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.077 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.077 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.099 [INFO][5172] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.099 [INFO][5172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.099 [INFO][5172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.113 [WARNING][5172] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.113 [INFO][5172] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.116 [INFO][5172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:49.121272 containerd[1475]: 2025-11-08 00:09:49.118 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.121272 containerd[1475]: time="2025-11-08T00:09:49.121082965Z" level=info msg="TearDown network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\" successfully" Nov 8 00:09:49.121272 containerd[1475]: time="2025-11-08T00:09:49.121109325Z" level=info msg="StopPodSandbox for \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\" returns successfully" Nov 8 00:09:49.122608 containerd[1475]: time="2025-11-08T00:09:49.122582867Z" level=info msg="RemovePodSandbox for \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\"" Nov 8 00:09:49.122684 containerd[1475]: time="2025-11-08T00:09:49.122614947Z" level=info msg="Forcibly stopping sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\"" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.168 [WARNING][5186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0", GenerateName:"calico-apiserver-5b79fdfd8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b79fdfd8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"1c91ca15b3907238d5db809b3f0d48ebdc456991b33c31a89ea8ae8ea75ba346", Pod:"calico-apiserver-5b79fdfd8b-4hxxc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7887ab0af6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.169 [INFO][5186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.169 [INFO][5186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" iface="eth0" netns="" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.169 [INFO][5186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.169 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.195 [INFO][5193] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.195 [INFO][5193] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.195 [INFO][5193] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.209 [WARNING][5193] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.209 [INFO][5193] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" HandleID="k8s-pod-network.6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Workload="ci--4081--3--6--n--fb20dfd731-k8s-calico--apiserver--5b79fdfd8b--4hxxc-eth0" Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.211 [INFO][5193] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:49.214485 containerd[1475]: 2025-11-08 00:09:49.212 [INFO][5186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437" Nov 8 00:09:49.215111 containerd[1475]: time="2025-11-08T00:09:49.214801436Z" level=info msg="TearDown network for sandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\" successfully" Nov 8 00:09:49.219641 containerd[1475]: time="2025-11-08T00:09:49.219424305Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:09:49.219641 containerd[1475]: time="2025-11-08T00:09:49.219537067Z" level=info msg="RemovePodSandbox \"6da94ef417e74ceb051896893232ad3e48bf14611a1420ad88c5af1cf7b7a437\" returns successfully" Nov 8 00:09:49.220288 containerd[1475]: time="2025-11-08T00:09:49.220009074Z" level=info msg="StopPodSandbox for \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\"" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.272 [WARNING][5207] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8c8208d0-d52c-4948-a7c4-1a012578a167", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c", Pod:"goldmane-666569f655-c6ph6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf9b0f5909b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.272 [INFO][5207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.272 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" iface="eth0" netns="" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.272 [INFO][5207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.272 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.300 [INFO][5214] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.300 [INFO][5214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.300 [INFO][5214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.314 [WARNING][5214] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.314 [INFO][5214] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.318 [INFO][5214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:49.323715 containerd[1475]: 2025-11-08 00:09:49.320 [INFO][5207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.325405 containerd[1475]: time="2025-11-08T00:09:49.324034418Z" level=info msg="TearDown network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\" successfully" Nov 8 00:09:49.325405 containerd[1475]: time="2025-11-08T00:09:49.324071299Z" level=info msg="StopPodSandbox for \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\" returns successfully" Nov 8 00:09:49.328779 containerd[1475]: time="2025-11-08T00:09:49.327711593Z" level=info msg="RemovePodSandbox for \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\"" Nov 8 00:09:49.328779 containerd[1475]: time="2025-11-08T00:09:49.327758634Z" level=info msg="Forcibly stopping sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\"" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.371 [WARNING][5228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8c8208d0-d52c-4948-a7c4-1a012578a167", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 9, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-6-n-fb20dfd731", ContainerID:"afa8af2f11f29b99e41417c11cf92cfabc3211ae8f045a9b1b518b8d4cd00d9c", Pod:"goldmane-666569f655-c6ph6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.72.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidf9b0f5909b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.372 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.372 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" iface="eth0" netns="" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.372 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.372 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.409 [INFO][5235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.410 [INFO][5235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.410 [INFO][5235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.424 [WARNING][5235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.424 [INFO][5235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" HandleID="k8s-pod-network.3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Workload="ci--4081--3--6--n--fb20dfd731-k8s-goldmane--666569f655--c6ph6-eth0" Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.427 [INFO][5235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:09:49.432622 containerd[1475]: 2025-11-08 00:09:49.429 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc" Nov 8 00:09:49.434983 containerd[1475]: time="2025-11-08T00:09:49.433254760Z" level=info msg="TearDown network for sandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\" successfully" Nov 8 00:09:49.440502 containerd[1475]: time="2025-11-08T00:09:49.440453107Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:09:49.440895 containerd[1475]: time="2025-11-08T00:09:49.440788952Z" level=info msg="RemovePodSandbox \"3fce18e20594e074b75bc45bff0034f1ace4f0a393a0855062cf6f44f995eedc\" returns successfully" Nov 8 00:09:52.493410 containerd[1475]: time="2025-11-08T00:09:52.493346850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:52.849976 containerd[1475]: time="2025-11-08T00:09:52.848360751Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:52.851662 containerd[1475]: time="2025-11-08T00:09:52.851406353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:52.851662 containerd[1475]: time="2025-11-08T00:09:52.851581035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:52.852625 kubelet[2597]: E1108 00:09:52.851914 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:52.852625 kubelet[2597]: E1108 00:09:52.851982 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:52.854965 kubelet[2597]: E1108 00:09:52.854677 2597 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2v9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-4hxxc_calico-apiserver(cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:52.856156 kubelet[2597]: E1108 00:09:52.856054 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:09:54.492968 containerd[1475]: time="2025-11-08T00:09:54.492928078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:09:54.856339 containerd[1475]: time="2025-11-08T00:09:54.856067271Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:54.858119 containerd[1475]: time="2025-11-08T00:09:54.857889695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:09:54.858119 containerd[1475]: time="2025-11-08T00:09:54.858043697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:09:54.858287 kubelet[2597]: E1108 00:09:54.858238 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:54.858675 kubelet[2597]: E1108 00:09:54.858290 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:09:54.858675 kubelet[2597]: E1108 00:09:54.858443 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c4f66b45d-rgxhv_calico-system(d5333c3e-0b85-42ef-9987-96412959a46c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:54.859657 kubelet[2597]: E1108 00:09:54.859617 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:09:55.495959 containerd[1475]: time="2025-11-08T00:09:55.495322681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:09:55.853800 containerd[1475]: time="2025-11-08T00:09:55.853745292Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:55.855317 containerd[1475]: time="2025-11-08T00:09:55.855221951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:09:55.855486 containerd[1475]: time="2025-11-08T00:09:55.855251911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:55.855780 kubelet[2597]: E1108 00:09:55.855695 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:55.855780 kubelet[2597]: E1108 00:09:55.855759 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:09:55.856524 
kubelet[2597]: E1108 00:09:55.856439 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hfm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-6tnl5_calico-apiserver(bc4e8de7-6ddd-43cb-ba37-33083ff72076): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:55.857756 kubelet[2597]: E1108 00:09:55.857683 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:09:57.495475 containerd[1475]: time="2025-11-08T00:09:57.495058631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:09:57.838772 containerd[1475]: time="2025-11-08T00:09:57.838399189Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:57.839989 containerd[1475]: time="2025-11-08T00:09:57.839939928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:09:57.840094 containerd[1475]: time="2025-11-08T00:09:57.840047809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:09:57.841362 kubelet[2597]: E1108 00:09:57.841309 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:57.841677 kubelet[2597]: E1108 00:09:57.841410 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:09:57.841721 kubelet[2597]: E1108 00:09:57.841644 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c6ph6_calico-system(8c8208d0-d52c-4948-a7c4-1a012578a167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:57.843155 containerd[1475]: time="2025-11-08T00:09:57.842225315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:09:57.843771 kubelet[2597]: E1108 00:09:57.843729 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:09:58.196347 containerd[1475]: time="2025-11-08T00:09:58.195717055Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:58.197583 containerd[1475]: time="2025-11-08T00:09:58.197367914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:09:58.197583 containerd[1475]: time="2025-11-08T00:09:58.197481996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:09:58.197714 kubelet[2597]: E1108 00:09:58.197672 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:58.197769 kubelet[2597]: E1108 00:09:58.197730 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:09:58.198246 kubelet[2597]: E1108 00:09:58.197852 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:58.201526 containerd[1475]: time="2025-11-08T00:09:58.201474442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:09:58.556726 containerd[1475]: time="2025-11-08T00:09:58.556177307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:09:58.558108 containerd[1475]: time="2025-11-08T00:09:58.557720605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:09:58.558254 containerd[1475]: time="2025-11-08T00:09:58.558154970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:09:58.558554 kubelet[2597]: E1108 00:09:58.558455 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:58.558554 kubelet[2597]: E1108 00:09:58.558519 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:09:58.558720 kubelet[2597]: E1108 00:09:58.558626 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:09:58.559869 kubelet[2597]: E1108 00:09:58.559815 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:10:01.497598 kubelet[2597]: E1108 00:10:01.497546 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:10:05.495184 kubelet[2597]: E1108 00:10:05.493938 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:10:07.499157 kubelet[2597]: E1108 00:10:07.498312 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:10:07.499157 kubelet[2597]: E1108 00:10:07.498697 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:10:11.494506 kubelet[2597]: E1108 00:10:11.494362 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:10:12.492033 kubelet[2597]: E1108 00:10:12.491965 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:10:16.493961 containerd[1475]: time="2025-11-08T00:10:16.493901756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:10:16.834166 containerd[1475]: time="2025-11-08T00:10:16.833332989Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:16.835274 containerd[1475]: time="2025-11-08T00:10:16.835070802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:10:16.835274 containerd[1475]: time="2025-11-08T00:10:16.835229683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:10:16.835821 kubelet[2597]: E1108 00:10:16.835718 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:10:16.835821 kubelet[2597]: E1108 00:10:16.835776 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:10:16.837858 kubelet[2597]: E1108 00:10:16.835940 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fef3b1f193134241bf57d7645ef8585e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:16.838010 containerd[1475]: time="2025-11-08T00:10:16.837933184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:10:17.164613 containerd[1475]: time="2025-11-08T00:10:17.164502012Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:17.166414 containerd[1475]: time="2025-11-08T00:10:17.166140024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:10:17.166414 containerd[1475]: time="2025-11-08T00:10:17.166270785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:10:17.167036 kubelet[2597]: E1108 00:10:17.166831 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:10:17.167036 kubelet[2597]: E1108 00:10:17.166939 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:10:17.167954 kubelet[2597]: E1108 00:10:17.167832 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:17.169092 kubelet[2597]: E1108 00:10:17.169039 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:10:17.496823 containerd[1475]: time="2025-11-08T00:10:17.496159410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:10:17.834183 containerd[1475]: time="2025-11-08T00:10:17.833800692Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 8 00:10:17.835483 containerd[1475]: time="2025-11-08T00:10:17.835311983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:10:17.835483 containerd[1475]: time="2025-11-08T00:10:17.835331143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:10:17.835910 kubelet[2597]: E1108 00:10:17.835625 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:17.835910 kubelet[2597]: E1108 00:10:17.835680 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:10:17.835910 kubelet[2597]: E1108 00:10:17.835805 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c4f66b45d-rgxhv_calico-system(d5333c3e-0b85-42ef-9987-96412959a46c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:17.836991 kubelet[2597]: E1108 00:10:17.836949 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:10:20.494319 containerd[1475]: time="2025-11-08T00:10:20.494273410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:20.833744 containerd[1475]: time="2025-11-08T00:10:20.833035180Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:20.834765 containerd[1475]: time="2025-11-08T00:10:20.834618630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:20.835266 containerd[1475]: time="2025-11-08T00:10:20.834712511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:20.835442 kubelet[2597]: E1108 00:10:20.835291 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:20.835442 kubelet[2597]: E1108 00:10:20.835350 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:20.836879 
kubelet[2597]: E1108 00:10:20.835507 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2v9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-4hxxc_calico-apiserver(cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:20.836879 kubelet[2597]: E1108 00:10:20.836824 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:10:21.497642 containerd[1475]: time="2025-11-08T00:10:21.497538475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:10:21.861548 containerd[1475]: time="2025-11-08T00:10:21.861319882Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:21.862886 containerd[1475]: time="2025-11-08T00:10:21.862749252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:10:21.862886 containerd[1475]: time="2025-11-08T00:10:21.862852852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:21.863054 kubelet[2597]: E1108 00:10:21.862997 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:21.863054 kubelet[2597]: E1108 00:10:21.863045 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:10:21.863392 kubelet[2597]: E1108 00:10:21.863178 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hfm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-6tnl5_calico-apiserver(bc4e8de7-6ddd-43cb-ba37-33083ff72076): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:21.864901 kubelet[2597]: E1108 00:10:21.864674 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:10:24.492555 containerd[1475]: time="2025-11-08T00:10:24.492506189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:10:24.833242 containerd[1475]: time="2025-11-08T00:10:24.832499734Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:24.835660 containerd[1475]: time="2025-11-08T00:10:24.835488913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:10:24.835660 containerd[1475]: time="2025-11-08T00:10:24.835613074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:10:24.835925 kubelet[2597]: E1108 00:10:24.835864 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:24.836257 kubelet[2597]: E1108 00:10:24.835949 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:10:24.836257 kubelet[2597]: E1108 00:10:24.836099 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:24.838896 containerd[1475]: time="2025-11-08T00:10:24.838838134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:10:25.182906 containerd[1475]: time="2025-11-08T00:10:25.182830280Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:25.184314 containerd[1475]: time="2025-11-08T00:10:25.184224289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:10:25.184480 containerd[1475]: time="2025-11-08T00:10:25.184285249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:10:25.185331 kubelet[2597]: E1108 00:10:25.185290 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:25.185444 kubelet[2597]: E1108 00:10:25.185342 2597 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:10:25.185504 kubelet[2597]: E1108 00:10:25.185467 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:25.187269 kubelet[2597]: E1108 00:10:25.186873 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:10:26.493249 containerd[1475]: time="2025-11-08T00:10:26.492548389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:10:26.821289 containerd[1475]: time="2025-11-08T00:10:26.821142177Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:10:26.822700 containerd[1475]: time="2025-11-08T00:10:26.822659986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:10:26.822802 containerd[1475]: time="2025-11-08T00:10:26.822762947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:10:26.824952 kubelet[2597]: E1108 00:10:26.823071 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:10:26.824952 kubelet[2597]: E1108 00:10:26.823153 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:10:26.824952 kubelet[2597]: E1108 00:10:26.823351 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c6ph6_calico-system(8c8208d0-d52c-4948-a7c4-1a012578a167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:10:26.825907 kubelet[2597]: E1108 00:10:26.825821 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:10:28.495165 kubelet[2597]: E1108 00:10:28.495107 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:10:29.496938 kubelet[2597]: E1108 00:10:29.496736 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:10:34.493869 kubelet[2597]: E1108 00:10:34.493458 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:10:34.538812 systemd[1]: run-containerd-runc-k8s.io-5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f-runc.YPAIlv.mount: Deactivated successfully. Nov 8 00:10:36.494217 kubelet[2597]: E1108 00:10:36.494164 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:10:37.496338 kubelet[2597]: E1108 00:10:37.495520 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:10:38.493474 kubelet[2597]: E1108 00:10:38.493198 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:10:42.493025 kubelet[2597]: E1108 00:10:42.492951 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:10:44.493176 kubelet[2597]: E1108 00:10:44.492023 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:10:47.497169 kubelet[2597]: E1108 00:10:47.497031 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:10:50.494246 kubelet[2597]: E1108 00:10:50.493692 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:10:50.495251 kubelet[2597]: E1108 00:10:50.494693 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:10:50.495251 kubelet[2597]: E1108 00:10:50.494849 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:10:56.492960 kubelet[2597]: E1108 00:10:56.492779 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:10:57.494014 kubelet[2597]: E1108 00:10:57.493193 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:11:01.491923 kubelet[2597]: E1108 00:11:01.491874 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:11:02.494957 containerd[1475]: time="2025-11-08T00:11:02.494866083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:11:02.496221 kubelet[2597]: E1108 00:11:02.495295 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:11:02.852879 containerd[1475]: time="2025-11-08T00:11:02.852809130Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:11:02.855348 containerd[1475]: time="2025-11-08T00:11:02.854725656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:11:02.855474 containerd[1475]: time="2025-11-08T00:11:02.854799297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:11:02.857503 kubelet[2597]: E1108 00:11:02.855700 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:11:02.857503 kubelet[2597]: E1108 00:11:02.855772 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:11:02.857503 kubelet[2597]: E1108 00:11:02.855934 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hfm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-6tnl5_calico-apiserver(bc4e8de7-6ddd-43cb-ba37-33083ff72076): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:02.857980 kubelet[2597]: E1108 00:11:02.857939 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:11:03.497641 containerd[1475]: time="2025-11-08T00:11:03.497602965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:11:03.846344 containerd[1475]: time="2025-11-08T00:11:03.845434884Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:11:03.847340 containerd[1475]: time="2025-11-08T00:11:03.847196690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:11:03.847340 containerd[1475]: 
time="2025-11-08T00:11:03.847309650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:11:03.847846 kubelet[2597]: E1108 00:11:03.847691 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:11:03.847846 kubelet[2597]: E1108 00:11:03.847770 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:11:03.849431 kubelet[2597]: E1108 00:11:03.849331 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2v9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-4hxxc_calico-apiserver(cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:03.850583 kubelet[2597]: E1108 00:11:03.850534 2597 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:11:04.565749 systemd[1]: run-containerd-runc-k8s.io-5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f-runc.uqoDPr.mount: Deactivated successfully. Nov 8 00:11:07.495123 containerd[1475]: time="2025-11-08T00:11:07.494870952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:11:07.831920 containerd[1475]: time="2025-11-08T00:11:07.831736506Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:11:07.834144 containerd[1475]: time="2025-11-08T00:11:07.834039153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:11:07.834278 containerd[1475]: time="2025-11-08T00:11:07.834206434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:11:07.835185 kubelet[2597]: E1108 00:11:07.834460 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:11:07.835185 kubelet[2597]: E1108 00:11:07.834540 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:11:07.835185 kubelet[2597]: E1108 00:11:07.834673 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fef3b1f193134241bf57d7645ef8585e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:07.836687 containerd[1475]: time="2025-11-08T00:11:07.836605961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:11:08.178074 containerd[1475]: time="2025-11-08T00:11:08.177999044Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:11:08.180017 containerd[1475]: time="2025-11-08T00:11:08.179843370Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:11:08.180176 containerd[1475]: time="2025-11-08T00:11:08.179927410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:11:08.180463 kubelet[2597]: E1108 00:11:08.180379 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:11:08.180541 kubelet[2597]: E1108 00:11:08.180466 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:11:08.180770 kubelet[2597]: E1108 00:11:08.180670 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:08.182306 kubelet[2597]: E1108 00:11:08.182255 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:11:09.492971 containerd[1475]: time="2025-11-08T00:11:09.492823215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:11:09.829069 containerd[1475]: time="2025-11-08T00:11:09.828305783Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Nov 8 00:11:09.829946 containerd[1475]: time="2025-11-08T00:11:09.829812827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:11:09.829946 containerd[1475]: time="2025-11-08T00:11:09.829910588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:11:09.831391 kubelet[2597]: E1108 00:11:09.831349 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:11:09.831743 kubelet[2597]: E1108 00:11:09.831399 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:11:09.831743 kubelet[2597]: E1108 00:11:09.831528 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c4f66b45d-rgxhv_calico-system(d5333c3e-0b85-42ef-9987-96412959a46c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:09.833028 kubelet[2597]: E1108 00:11:09.832967 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:11:13.496504 containerd[1475]: time="2025-11-08T00:11:13.496429631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:11:13.848100 containerd[1475]: time="2025-11-08T00:11:13.847937006Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:11:13.849605 containerd[1475]: time="2025-11-08T00:11:13.849531371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:11:13.849755 containerd[1475]: time="2025-11-08T00:11:13.849641211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:11:13.850036 kubelet[2597]: E1108 00:11:13.849972 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:11:13.850036 kubelet[2597]: E1108 00:11:13.850031 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:11:13.850565 kubelet[2597]: E1108 00:11:13.850151 2597 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:13.852178 containerd[1475]: time="2025-11-08T00:11:13.852124779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:11:14.222392 containerd[1475]: time="2025-11-08T00:11:14.222304644Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:11:14.224123 containerd[1475]: time="2025-11-08T00:11:14.224021129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:11:14.224281 containerd[1475]: time="2025-11-08T00:11:14.224219050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:11:14.224578 kubelet[2597]: E1108 00:11:14.224432 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:11:14.225197 kubelet[2597]: E1108 00:11:14.224576 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:11:14.225197 kubelet[2597]: E1108 00:11:14.224805 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:14.226375 kubelet[2597]: E1108 00:11:14.226305 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:11:14.493841 containerd[1475]: time="2025-11-08T00:11:14.493699451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:11:14.848396 containerd[1475]: time="2025-11-08T00:11:14.848113306Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:11:14.849972 containerd[1475]: time="2025-11-08T00:11:14.849873671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:11:14.849972 containerd[1475]: time="2025-11-08T00:11:14.849952431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:11:14.851850 kubelet[2597]: E1108 00:11:14.850195 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:11:14.851850 kubelet[2597]: E1108 00:11:14.850251 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:11:14.851850 kubelet[2597]: E1108 00:11:14.850373 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c6ph6_calico-system(8c8208d0-d52c-4948-a7c4-1a012578a167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:11:14.852437 kubelet[2597]: E1108 00:11:14.852398 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:11:15.496178 kubelet[2597]: E1108 
00:11:15.495620 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:11:15.498107 kubelet[2597]: E1108 00:11:15.498028 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:11:20.493744 kubelet[2597]: E1108 00:11:20.493549 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:11:20.496749 kubelet[2597]: E1108 00:11:20.496337 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:11:24.393692 systemd[1]: Started sshd@8-46.224.11.50:22-139.178.68.195:34940.service - OpenSSH per-connection server daemon (139.178.68.195:34940). Nov 8 00:11:25.342341 sshd[5375]: Accepted publickey for core from 139.178.68.195 port 34940 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:25.346101 sshd[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:25.352894 systemd-logind[1459]: New session 8 of user core. Nov 8 00:11:25.361353 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 8 00:11:26.088385 sshd[5375]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:26.093972 systemd[1]: sshd@8-46.224.11.50:22-139.178.68.195:34940.service: Deactivated successfully. Nov 8 00:11:26.096586 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:11:26.099251 systemd-logind[1459]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:11:26.102040 systemd-logind[1459]: Removed session 8. Nov 8 00:11:26.493384 kubelet[2597]: E1108 00:11:26.492700 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:11:26.493384 kubelet[2597]: E1108 00:11:26.493251 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:11:28.495201 kubelet[2597]: E1108 00:11:28.495124 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:11:28.495630 kubelet[2597]: E1108 00:11:28.495236 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:11:31.261098 systemd[1]: Started sshd@9-46.224.11.50:22-139.178.68.195:34956.service - OpenSSH per-connection server daemon (139.178.68.195:34956). 
Nov 8 00:11:31.496198 kubelet[2597]: E1108 00:11:31.495292 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:11:32.217942 sshd[5389]: Accepted publickey for core from 139.178.68.195 port 34956 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:32.219688 sshd[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:32.225030 systemd-logind[1459]: New session 9 of user core. Nov 8 00:11:32.236500 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:11:32.964078 sshd[5389]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:32.970400 systemd-logind[1459]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:11:32.970980 systemd[1]: sshd@9-46.224.11.50:22-139.178.68.195:34956.service: Deactivated successfully. Nov 8 00:11:32.974202 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:11:32.975541 systemd-logind[1459]: Removed session 9. Nov 8 00:11:35.500309 kubelet[2597]: E1108 00:11:35.500231 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:11:37.496947 kubelet[2597]: E1108 00:11:37.496288 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:11:38.127864 systemd[1]: Started sshd@10-46.224.11.50:22-139.178.68.195:39918.service - OpenSSH per-connection server daemon (139.178.68.195:39918). 
Nov 8 00:11:38.494217 kubelet[2597]: E1108 00:11:38.493764 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:11:39.086221 sshd[5424]: Accepted publickey for core from 139.178.68.195 port 39918 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:39.087541 sshd[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:39.097210 systemd-logind[1459]: New session 10 of user core. Nov 8 00:11:39.102402 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:11:39.493967 kubelet[2597]: E1108 00:11:39.493915 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:11:39.838827 sshd[5424]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:39.844769 systemd-logind[1459]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:11:39.845352 systemd[1]: sshd@10-46.224.11.50:22-139.178.68.195:39918.service: Deactivated successfully. Nov 8 00:11:39.849566 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:11:39.852233 systemd-logind[1459]: Removed session 10. Nov 8 00:11:40.010546 systemd[1]: Started sshd@11-46.224.11.50:22-139.178.68.195:39930.service - OpenSSH per-connection server daemon (139.178.68.195:39930). 
Nov 8 00:11:40.494626 kubelet[2597]: E1108 00:11:40.494467 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:11:40.953784 sshd[5442]: Accepted publickey for core from 139.178.68.195 port 39930 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:40.955590 sshd[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:40.961977 systemd-logind[1459]: New session 11 of user core. Nov 8 00:11:40.967460 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:11:41.778431 sshd[5442]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:41.783401 systemd-logind[1459]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:11:41.783610 systemd[1]: sshd@11-46.224.11.50:22-139.178.68.195:39930.service: Deactivated successfully. Nov 8 00:11:41.789033 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:11:41.794864 systemd-logind[1459]: Removed session 11. Nov 8 00:11:41.951245 systemd[1]: Started sshd@12-46.224.11.50:22-139.178.68.195:39934.service - OpenSSH per-connection server daemon (139.178.68.195:39934). Nov 8 00:11:42.899445 sshd[5453]: Accepted publickey for core from 139.178.68.195 port 39934 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:42.901698 sshd[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:42.908406 systemd-logind[1459]: New session 12 of user core. Nov 8 00:11:42.915792 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:11:43.666561 sshd[5453]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:43.672307 systemd-logind[1459]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:11:43.672995 systemd[1]: sshd@12-46.224.11.50:22-139.178.68.195:39934.service: Deactivated successfully. Nov 8 00:11:43.677285 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:11:43.679175 systemd-logind[1459]: Removed session 12. 
Nov 8 00:11:44.495840 kubelet[2597]: E1108 00:11:44.495721 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:11:46.493806 kubelet[2597]: E1108 00:11:46.493627 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:11:48.836227 systemd[1]: Started sshd@13-46.224.11.50:22-139.178.68.195:42496.service - OpenSSH per-connection server daemon (139.178.68.195:42496). Nov 8 00:11:49.783750 sshd[5468]: Accepted publickey for core from 139.178.68.195 port 42496 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:49.785809 sshd[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:49.793484 systemd-logind[1459]: New session 13 of user core. Nov 8 00:11:49.800336 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:11:50.550579 sshd[5468]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:50.556857 systemd[1]: sshd@13-46.224.11.50:22-139.178.68.195:42496.service: Deactivated successfully. Nov 8 00:11:50.560066 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:11:50.560988 systemd-logind[1459]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:11:50.563742 systemd-logind[1459]: Removed session 13. Nov 8 00:11:50.718542 systemd[1]: Started sshd@14-46.224.11.50:22-139.178.68.195:42504.service - OpenSSH per-connection server daemon (139.178.68.195:42504). 
Nov 8 00:11:51.493569 kubelet[2597]: E1108 00:11:51.492804 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:11:51.645263 sshd[5481]: Accepted publickey for core from 139.178.68.195 port 42504 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:51.649457 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:51.659163 systemd-logind[1459]: New session 14 of user core. Nov 8 00:11:51.663331 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:11:52.492739 kubelet[2597]: E1108 00:11:52.492340 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:11:52.505895 sshd[5481]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:52.511325 systemd-logind[1459]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:11:52.511575 systemd[1]: sshd@14-46.224.11.50:22-139.178.68.195:42504.service: Deactivated successfully. Nov 8 00:11:52.517746 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:11:52.522017 systemd-logind[1459]: Removed session 14. Nov 8 00:11:52.669546 systemd[1]: Started sshd@15-46.224.11.50:22-139.178.68.195:42512.service - OpenSSH per-connection server daemon (139.178.68.195:42512). 
Nov 8 00:11:53.494759 kubelet[2597]: E1108 00:11:53.493773 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:11:53.495494 kubelet[2597]: E1108 00:11:53.494944 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:11:53.609117 sshd[5492]: Accepted publickey for core from 139.178.68.195 port 42512 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:53.611645 sshd[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:53.617638 systemd-logind[1459]: New session 15 of user core. Nov 8 00:11:53.625581 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:11:55.040017 sshd[5492]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:55.045099 systemd[1]: sshd@15-46.224.11.50:22-139.178.68.195:42512.service: Deactivated successfully. Nov 8 00:11:55.047954 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:11:55.051926 systemd-logind[1459]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:11:55.053459 systemd-logind[1459]: Removed session 15. Nov 8 00:11:55.205535 systemd[1]: Started sshd@16-46.224.11.50:22-139.178.68.195:48488.service - OpenSSH per-connection server daemon (139.178.68.195:48488). 
Nov 8 00:11:55.494317 kubelet[2597]: E1108 00:11:55.494269 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:11:56.147726 sshd[5514]: Accepted publickey for core from 139.178.68.195 port 48488 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:56.151115 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:56.159014 systemd-logind[1459]: New session 16 of user core. Nov 8 00:11:56.161352 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:11:57.046610 sshd[5514]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:57.054723 systemd-logind[1459]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:11:57.055034 systemd[1]: sshd@16-46.224.11.50:22-139.178.68.195:48488.service: Deactivated successfully. Nov 8 00:11:57.058293 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:11:57.060652 systemd-logind[1459]: Removed session 16. Nov 8 00:11:57.214429 systemd[1]: Started sshd@17-46.224.11.50:22-139.178.68.195:48496.service - OpenSSH per-connection server daemon (139.178.68.195:48496). Nov 8 00:11:58.148447 sshd[5525]: Accepted publickey for core from 139.178.68.195 port 48496 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:11:58.151002 sshd[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:11:58.163739 systemd-logind[1459]: New session 17 of user core. Nov 8 00:11:58.171377 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:11:58.912833 sshd[5525]: pam_unix(sshd:session): session closed for user core Nov 8 00:11:58.917655 systemd[1]: sshd@17-46.224.11.50:22-139.178.68.195:48496.service: Deactivated successfully. Nov 8 00:11:58.921232 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:11:58.922604 systemd-logind[1459]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:11:58.924482 systemd-logind[1459]: Removed session 17. 
Nov 8 00:12:00.493779 kubelet[2597]: E1108 00:12:00.493596 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c" Nov 8 00:12:04.077520 systemd[1]: Started sshd@18-46.224.11.50:22-139.178.68.195:39284.service - OpenSSH per-connection server daemon (139.178.68.195:39284). Nov 8 00:12:04.534479 systemd[1]: run-containerd-runc-k8s.io-5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f-runc.k0cBZx.mount: Deactivated successfully. Nov 8 00:12:05.036536 sshd[5540]: Accepted publickey for core from 139.178.68.195 port 39284 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:12:05.040123 sshd[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:12:05.047248 systemd-logind[1459]: New session 18 of user core. Nov 8 00:12:05.048317 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:12:05.496228 kubelet[2597]: E1108 00:12:05.494584 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b" Nov 8 00:12:05.498564 kubelet[2597]: E1108 00:12:05.498183 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076" Nov 8 00:12:05.803423 sshd[5540]: pam_unix(sshd:session): session closed for user core Nov 8 00:12:05.808005 systemd-logind[1459]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:12:05.810202 systemd[1]: sshd@18-46.224.11.50:22-139.178.68.195:39284.service: Deactivated successfully. Nov 8 00:12:05.814418 systemd[1]: session-18.scope: Deactivated successfully. 
Nov 8 00:12:05.817181 systemd-logind[1459]: Removed session 18. Nov 8 00:12:06.492673 kubelet[2597]: E1108 00:12:06.491985 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec" Nov 8 00:12:06.492673 kubelet[2597]: E1108 00:12:06.492405 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167" Nov 8 00:12:07.494705 kubelet[2597]: E1108 00:12:07.494609 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b" Nov 8 00:12:10.971265 systemd[1]: Started sshd@19-46.224.11.50:22-139.178.68.195:39290.service - OpenSSH per-connection server daemon (139.178.68.195:39290). Nov 8 00:12:11.898747 sshd[5575]: Accepted publickey for core from 139.178.68.195 port 39290 ssh2: RSA SHA256:X94JdbmwZuMCIFktH68nC0dEPfkNdvpvsOPYmCafBEM Nov 8 00:12:11.901025 sshd[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:12:11.907249 systemd-logind[1459]: New session 19 of user core. Nov 8 00:12:11.910943 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:12:12.165839 systemd[1]: Started sshd@20-46.224.11.50:22-92.118.39.76:54882.service - OpenSSH per-connection server daemon (92.118.39.76:54882). Nov 8 00:12:12.216387 sshd[5585]: Connection closed by 92.118.39.76 port 54882 Nov 8 00:12:12.217905 systemd[1]: sshd@20-46.224.11.50:22-92.118.39.76:54882.service: Deactivated successfully. Nov 8 00:12:12.660768 sshd[5575]: pam_unix(sshd:session): session closed for user core Nov 8 00:12:12.667596 systemd[1]: sshd@19-46.224.11.50:22-139.178.68.195:39290.service: Deactivated successfully. 
Nov 8 00:12:12.672430 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:12:12.673719 systemd-logind[1459]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:12:12.675040 systemd-logind[1459]: Removed session 19.
Nov 8 00:12:14.493649 kubelet[2597]: E1108 00:12:14.493601 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c"
Nov 8 00:12:17.493025 kubelet[2597]: E1108 00:12:17.492644 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076"
Nov 8 00:12:18.492605 kubelet[2597]: E1108 00:12:18.492203 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec"
Nov 8 00:12:19.494397 kubelet[2597]: E1108 00:12:19.493889 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b"
Nov 8 00:12:20.495960 kubelet[2597]: E1108 00:12:20.494045 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167"
Nov 8 00:12:20.495960 kubelet[2597]: E1108 00:12:20.494779 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b"
Nov 8 00:12:21.691708 systemd[1]: Started sshd@21-46.224.11.50:22-198.235.24.82:54364.service - OpenSSH per-connection server daemon (198.235.24.82:54364).
Nov 8 00:12:22.003066 sshd[5599]: Connection closed by 198.235.24.82 port 54364
Nov 8 00:12:22.005510 systemd[1]: sshd@21-46.224.11.50:22-198.235.24.82:54364.service: Deactivated successfully.
Nov 8 00:12:25.494901 kubelet[2597]: E1108 00:12:25.494833 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c"
Nov 8 00:12:29.493657 containerd[1475]: time="2025-11-08T00:12:29.492941377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:12:29.840424 containerd[1475]: time="2025-11-08T00:12:29.839433888Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:29.841911 containerd[1475]: time="2025-11-08T00:12:29.841710745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:12:29.841911 containerd[1475]: time="2025-11-08T00:12:29.841787546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:12:29.842346 kubelet[2597]: E1108 00:12:29.841999 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:12:29.842346 kubelet[2597]: E1108 00:12:29.842070 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:12:29.844254 kubelet[2597]: E1108 00:12:29.842297 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5hfm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-6tnl5_calico-apiserver(bc4e8de7-6ddd-43cb-ba37-33083ff72076): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:29.844254 kubelet[2597]: E1108 00:12:29.843863 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076"
Nov 8 00:12:31.496962 kubelet[2597]: E1108 00:12:31.496905 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b"
Nov 8 00:12:31.500448 kubelet[2597]: E1108 00:12:31.498921 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167"
Nov 8 00:12:32.492807 containerd[1475]: time="2025-11-08T00:12:32.492682854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:12:32.848997 containerd[1475]: time="2025-11-08T00:12:32.848817447Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:32.852614 containerd[1475]: time="2025-11-08T00:12:32.852456113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:12:32.852614 containerd[1475]: time="2025-11-08T00:12:32.852552594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:12:32.853296 kubelet[2597]: E1108 00:12:32.853185 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:12:32.853296 kubelet[2597]: E1108 00:12:32.853257 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:12:32.854926 kubelet[2597]: E1108 00:12:32.854451 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x2v9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b79fdfd8b-4hxxc_calico-apiserver(cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:32.855839 kubelet[2597]: E1108 00:12:32.855741 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec"
Nov 8 00:12:34.493665 containerd[1475]: time="2025-11-08T00:12:34.493615153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:12:34.531893 systemd[1]: run-containerd-runc-k8s.io-5e5af4384c6525ff029aa95e7f4845b9312d29b6e0bc3e46ba734f9b281f391f-runc.U4LjdB.mount: Deactivated successfully.
Nov 8 00:12:34.844753 containerd[1475]: time="2025-11-08T00:12:34.844576945Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:34.846354 containerd[1475]: time="2025-11-08T00:12:34.846288957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:12:34.846560 containerd[1475]: time="2025-11-08T00:12:34.846297157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:12:34.846632 kubelet[2597]: E1108 00:12:34.846548 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:12:34.846632 kubelet[2597]: E1108 00:12:34.846600 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:12:34.847057 kubelet[2597]: E1108 00:12:34.846718 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:fef3b1f193134241bf57d7645ef8585e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:34.848907 containerd[1475]: time="2025-11-08T00:12:34.848836695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:12:35.203755 containerd[1475]: time="2025-11-08T00:12:35.203650542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:35.205578 containerd[1475]: time="2025-11-08T00:12:35.205445875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:12:35.205699 containerd[1475]: time="2025-11-08T00:12:35.205632156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:12:35.206047 kubelet[2597]: E1108 00:12:35.205921 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:12:35.206047 kubelet[2597]: E1108 00:12:35.206001 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:12:35.206380 kubelet[2597]: E1108 00:12:35.206200 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcwhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-74965ffc78-dt582_calico-system(2215902e-6443-46b1-ae69-123ea2434f7b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:35.207573 kubelet[2597]: E1108 00:12:35.207466 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b"
Nov 8 00:12:38.493699 containerd[1475]: time="2025-11-08T00:12:38.493639966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:12:38.841951 containerd[1475]: time="2025-11-08T00:12:38.841771855Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:38.843664 containerd[1475]: time="2025-11-08T00:12:38.843604268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:12:38.843847 containerd[1475]: time="2025-11-08T00:12:38.843718668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:12:38.843948 kubelet[2597]: E1108 00:12:38.843874 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:12:38.843948 kubelet[2597]: E1108 00:12:38.843929 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:12:38.844480 kubelet[2597]: E1108 00:12:38.844061 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9gf9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c4f66b45d-rgxhv_calico-system(d5333c3e-0b85-42ef-9987-96412959a46c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:38.845227 kubelet[2597]: E1108 00:12:38.845191 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7c4f66b45d-rgxhv" podUID="d5333c3e-0b85-42ef-9987-96412959a46c"
Nov 8 00:12:42.493011 kubelet[2597]: E1108 00:12:42.492918 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-6tnl5" podUID="bc4e8de7-6ddd-43cb-ba37-33083ff72076"
Nov 8 00:12:43.493952 containerd[1475]: time="2025-11-08T00:12:43.493586165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:12:43.855018 containerd[1475]: time="2025-11-08T00:12:43.854886803Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:43.856591 containerd[1475]: time="2025-11-08T00:12:43.856449813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:12:43.856591 containerd[1475]: time="2025-11-08T00:12:43.856514014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:12:43.856934 kubelet[2597]: E1108 00:12:43.856773 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:12:43.856934 kubelet[2597]: E1108 00:12:43.856832 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:12:43.857611 kubelet[2597]: E1108 00:12:43.856992 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:43.859514 containerd[1475]: time="2025-11-08T00:12:43.859435193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:12:44.197169 containerd[1475]: time="2025-11-08T00:12:44.196937105Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:44.199171 containerd[1475]: time="2025-11-08T00:12:44.198990118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:12:44.199171 containerd[1475]: time="2025-11-08T00:12:44.199092439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:12:44.199381 kubelet[2597]: E1108 00:12:44.199228 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:12:44.199381 kubelet[2597]: E1108 00:12:44.199277 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:12:44.199653 kubelet[2597]: E1108 00:12:44.199392 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gfc2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-l4z57_calico-system(8da5e4ab-3d6f-46a3-91d8-e794f2481a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:44.200614 kubelet[2597]: E1108 00:12:44.200554 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-l4z57" podUID="8da5e4ab-3d6f-46a3-91d8-e794f2481a0b"
Nov 8 00:12:44.494271 containerd[1475]: time="2025-11-08T00:12:44.494081348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:12:44.586605 systemd[1]: cri-containerd-c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364.scope: Deactivated successfully.
Nov 8 00:12:44.586858 systemd[1]: cri-containerd-c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364.scope: Consumed 47.678s CPU time.
Nov 8 00:12:44.615035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364-rootfs.mount: Deactivated successfully.
Nov 8 00:12:44.621608 containerd[1475]: time="2025-11-08T00:12:44.621533733Z" level=info msg="shim disconnected" id=c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364 namespace=k8s.io
Nov 8 00:12:44.622108 containerd[1475]: time="2025-11-08T00:12:44.621882616Z" level=warning msg="cleaning up after shim disconnected" id=c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364 namespace=k8s.io
Nov 8 00:12:44.622108 containerd[1475]: time="2025-11-08T00:12:44.621909376Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:12:44.838286 containerd[1475]: time="2025-11-08T00:12:44.838041295Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:12:44.840808 containerd[1475]: time="2025-11-08T00:12:44.840638632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:12:44.840808 containerd[1475]: time="2025-11-08T00:12:44.840698272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:12:44.841115 kubelet[2597]: E1108 00:12:44.840832 2597 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:12:44.841115 kubelet[2597]: E1108 00:12:44.840879 2597 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:12:44.841115 kubelet[2597]: E1108 00:12:44.841006 2597 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bb7sd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-c6ph6_calico-system(8c8208d0-d52c-4948-a7c4-1a012578a167): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:12:44.842224 kubelet[2597]: E1108 00:12:44.842164 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-c6ph6" podUID="8c8208d0-d52c-4948-a7c4-1a012578a167"
Nov 8 00:12:44.869064 kubelet[2597]: E1108 00:12:44.866894 2597 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38208->10.0.0.2:2379: read: connection timed out"
Nov 8 00:12:44.870234 systemd[1]: cri-containerd-cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d.scope: Deactivated successfully.
Nov 8 00:12:44.870828 systemd[1]: cri-containerd-cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d.scope: Consumed 4.210s CPU time, 15.5M memory peak, 0B memory swap peak.
Nov 8 00:12:44.888842 systemd[1]: cri-containerd-0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742.scope: Deactivated successfully.
Nov 8 00:12:44.889405 systemd[1]: cri-containerd-0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742.scope: Consumed 4.923s CPU time, 19.5M memory peak, 0B memory swap peak.
Nov 8 00:12:44.907317 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d-rootfs.mount: Deactivated successfully.
Nov 8 00:12:44.915191 containerd[1475]: time="2025-11-08T00:12:44.915041513Z" level=info msg="shim disconnected" id=cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d namespace=k8s.io
Nov 8 00:12:44.915589 containerd[1475]: time="2025-11-08T00:12:44.915441556Z" level=warning msg="cleaning up after shim disconnected" id=cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d namespace=k8s.io
Nov 8 00:12:44.915589 containerd[1475]: time="2025-11-08T00:12:44.915519236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:12:44.926458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742-rootfs.mount: Deactivated successfully.
Nov 8 00:12:44.929841 containerd[1475]: time="2025-11-08T00:12:44.929677568Z" level=info msg="shim disconnected" id=0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742 namespace=k8s.io
Nov 8 00:12:44.930101 containerd[1475]: time="2025-11-08T00:12:44.929822369Z" level=warning msg="cleaning up after shim disconnected" id=0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742 namespace=k8s.io
Nov 8 00:12:44.930101 containerd[1475]: time="2025-11-08T00:12:44.929995730Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 8 00:12:44.937844 containerd[1475]: time="2025-11-08T00:12:44.937622299Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:12:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 8 00:12:45.389166 kubelet[2597]: I1108 00:12:45.388871 2597 scope.go:117] "RemoveContainer" containerID="0438f04a146df0fbbb5738b06f559ceb85ec474979a4d50cb01283448d016742"
Nov 8 00:12:45.392023 kubelet[2597]: I1108 00:12:45.391999 2597 scope.go:117] "RemoveContainer" containerID="cd2b405e3cec11c266c3a62dc98e624ccbb2e55e6bb25ecbfee3687883277d3d"
Nov 8 00:12:45.392619 containerd[1475]: time="2025-11-08T00:12:45.391996700Z" level=info msg="CreateContainer within sandbox \"5d2004c983537cda36c267e714d895f05c791f6ec4bc5a57032c2c59b6bf576e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 8 00:12:45.394836 kubelet[2597]: I1108 00:12:45.394813 2597 scope.go:117] "RemoveContainer" containerID="c1c8b5625406ef4cd700b86ac7882f4542f608c9086c4ddb6eabca361d7cc364"
Nov 8 00:12:45.395917 containerd[1475]: time="2025-11-08T00:12:45.395885925Z" level=info msg="CreateContainer within sandbox \"fa1c2ce5f64a3e77d0dbd9f7fe95d34fd276f437ca085ab36126948751facfda\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 8 00:12:45.398322 containerd[1475]: time="2025-11-08T00:12:45.398291060Z" level=info msg="CreateContainer within sandbox \"47c665592cccd3359b48b02e625ec64baa35376b3699f91bda3200094bc25e0c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 8 00:12:45.417566 containerd[1475]: time="2025-11-08T00:12:45.417479904Z" level=info msg="CreateContainer within sandbox \"5d2004c983537cda36c267e714d895f05c791f6ec4bc5a57032c2c59b6bf576e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4be72b5b2fd8fbda4c9f1b89a6e1d4c3b9a5693677851a23fd3db5ab98d1fd3f\""
Nov 8 00:12:45.419437 containerd[1475]: time="2025-11-08T00:12:45.418351229Z" level=info msg="StartContainer for \"4be72b5b2fd8fbda4c9f1b89a6e1d4c3b9a5693677851a23fd3db5ab98d1fd3f\""
Nov 8 00:12:45.419437 containerd[1475]: time="2025-11-08T00:12:45.419250275Z" level=info msg="CreateContainer within sandbox \"fa1c2ce5f64a3e77d0dbd9f7fe95d34fd276f437ca085ab36126948751facfda\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8eaf9828ae58a94996b62b55b426d389fd6a61eebe2bdcaf0f7ccef47243d7bb\""
Nov 8 00:12:45.420169 containerd[1475]: time="2025-11-08T00:12:45.420147321Z" level=info msg="StartContainer for \"8eaf9828ae58a94996b62b55b426d389fd6a61eebe2bdcaf0f7ccef47243d7bb\""
Nov 8 00:12:45.420605 containerd[1475]: time="2025-11-08T00:12:45.420572364Z" level=info msg="CreateContainer within sandbox \"47c665592cccd3359b48b02e625ec64baa35376b3699f91bda3200094bc25e0c\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b3dd7e3ac6d932b71a876c4070f2136527dfe7a5ac07a4c45ba503b2b607bcbb\""
Nov 8 00:12:45.422995 containerd[1475]: time="2025-11-08T00:12:45.422179014Z" level=info msg="StartContainer for \"b3dd7e3ac6d932b71a876c4070f2136527dfe7a5ac07a4c45ba503b2b607bcbb\""
Nov 8 00:12:45.455608 systemd[1]: Started cri-containerd-8eaf9828ae58a94996b62b55b426d389fd6a61eebe2bdcaf0f7ccef47243d7bb.scope - libcontainer container 8eaf9828ae58a94996b62b55b426d389fd6a61eebe2bdcaf0f7ccef47243d7bb.
Nov 8 00:12:45.465308 systemd[1]: Started cri-containerd-4be72b5b2fd8fbda4c9f1b89a6e1d4c3b9a5693677851a23fd3db5ab98d1fd3f.scope - libcontainer container 4be72b5b2fd8fbda4c9f1b89a6e1d4c3b9a5693677851a23fd3db5ab98d1fd3f.
Nov 8 00:12:45.466357 systemd[1]: Started cri-containerd-b3dd7e3ac6d932b71a876c4070f2136527dfe7a5ac07a4c45ba503b2b607bcbb.scope - libcontainer container b3dd7e3ac6d932b71a876c4070f2136527dfe7a5ac07a4c45ba503b2b607bcbb.
Nov 8 00:12:45.506936 kubelet[2597]: E1108 00:12:45.506875 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74965ffc78-dt582" podUID="2215902e-6443-46b1-ae69-123ea2434f7b"
Nov 8 00:12:45.520238 containerd[1475]: time="2025-11-08T00:12:45.519644080Z" level=info msg="StartContainer for \"8eaf9828ae58a94996b62b55b426d389fd6a61eebe2bdcaf0f7ccef47243d7bb\" returns successfully"
Nov 8 00:12:45.532345 containerd[1475]: time="2025-11-08T00:12:45.532304361Z" level=info msg="StartContainer for \"b3dd7e3ac6d932b71a876c4070f2136527dfe7a5ac07a4c45ba503b2b607bcbb\" returns successfully"
Nov 8 00:12:45.542098 containerd[1475]: time="2025-11-08T00:12:45.542045983Z" level=info msg="StartContainer for \"4be72b5b2fd8fbda4c9f1b89a6e1d4c3b9a5693677851a23fd3db5ab98d1fd3f\" returns successfully"
Nov 8 00:12:46.491665 kubelet[2597]: E1108 00:12:46.491630 2597 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b79fdfd8b-4hxxc" podUID="cc3077c0-ce1f-45f6-b97f-fbfd9a4135ec"
Nov 8 00:12:48.836889 kubelet[2597]: E1108 00:12:48.832950 2597 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38020->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4081-3-6-n-fb20dfd731.1875dfabdb8a9e89 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4081-3-6-n-fb20dfd731,UID:f61aadfc756ae34e9a5ec4d8082bca36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4081-3-6-n-fb20dfd731,},FirstTimestamp:2025-11-08 00:12:38.401113737 +0000 UTC m=+231.040192233,LastTimestamp:2025-11-08 00:12:38.401113737 +0000 UTC m=+231.040192233,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-6-n-fb20dfd731,}"