Nov 23 23:00:08.833230 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 23 23:00:08.833256 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:53:53 -00 2025 Nov 23 23:00:08.833266 kernel: KASLR enabled Nov 23 23:00:08.833272 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Nov 23 23:00:08.833278 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Nov 23 23:00:08.833283 kernel: random: crng init done Nov 23 23:00:08.833875 kernel: secureboot: Secure boot disabled Nov 23 23:00:08.833886 kernel: ACPI: Early table checksum verification disabled Nov 23 23:00:08.833892 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Nov 23 23:00:08.833899 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Nov 23 23:00:08.833909 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833915 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833921 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833927 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833935 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833943 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833949 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833956 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833976 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 23:00:08.833984 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Nov 23 23:00:08.833990 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Nov 23 23:00:08.833997 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 23:00:08.834003 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Nov 23 23:00:08.834010 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Nov 23 23:00:08.834016 kernel: Zone ranges: Nov 23 23:00:08.834023 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 23 23:00:08.834031 kernel: DMA32 empty Nov 23 23:00:08.834038 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Nov 23 23:00:08.834044 kernel: Device empty Nov 23 23:00:08.834050 kernel: Movable zone start for each node Nov 23 23:00:08.834057 kernel: Early memory node ranges Nov 23 23:00:08.834063 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Nov 23 23:00:08.834069 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Nov 23 23:00:08.834075 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Nov 23 23:00:08.834082 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Nov 23 23:00:08.834088 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Nov 23 23:00:08.834094 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Nov 23 23:00:08.834101 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Nov 23 23:00:08.834109 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Nov 23 23:00:08.834115 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Nov 23 
23:00:08.834125 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Nov 23 23:00:08.834132 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Nov 23 23:00:08.834139 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Nov 23 23:00:08.834148 kernel: psci: probing for conduit method from ACPI. Nov 23 23:00:08.834154 kernel: psci: PSCIv1.1 detected in firmware. Nov 23 23:00:08.834162 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 23:00:08.834169 kernel: psci: Trusted OS migration not required Nov 23 23:00:08.834176 kernel: psci: SMC Calling Convention v1.1 Nov 23 23:00:08.834183 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 23 23:00:08.834190 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 23:00:08.834198 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 23:00:08.834205 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 23 23:00:08.834212 kernel: Detected PIPT I-cache on CPU0 Nov 23 23:00:08.834219 kernel: CPU features: detected: GIC system register CPU interface Nov 23 23:00:08.834227 kernel: CPU features: detected: Spectre-v4 Nov 23 23:00:08.834233 kernel: CPU features: detected: Spectre-BHB Nov 23 23:00:08.834240 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 23 23:00:08.834247 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 23 23:00:08.834254 kernel: CPU features: detected: ARM erratum 1418040 Nov 23 23:00:08.834261 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 23 23:00:08.834268 kernel: alternatives: applying boot alternatives Nov 23 23:00:08.834277 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:00:08.834284 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 23:00:08.834324 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 23:00:08.834334 kernel: Fallback order for Node 0: 0 Nov 23 23:00:08.834344 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Nov 23 23:00:08.834351 kernel: Policy zone: Normal Nov 23 23:00:08.834358 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 23:00:08.834364 kernel: software IO TLB: area num 2. Nov 23 23:00:08.834371 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Nov 23 23:00:08.834378 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 23 23:00:08.834385 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 23:00:08.834393 kernel: rcu: RCU event tracing is enabled. Nov 23 23:00:08.834399 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 23 23:00:08.834406 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 23:00:08.834413 kernel: Tracing variant of Tasks RCU enabled. Nov 23 23:00:08.834420 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 23 23:00:08.834428 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 23 23:00:08.834435 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 23 23:00:08.834442 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 23 23:00:08.834449 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 23:00:08.834457 kernel: GICv3: 256 SPIs implemented Nov 23 23:00:08.834464 kernel: GICv3: 0 Extended SPIs implemented Nov 23 23:00:08.834470 kernel: Root IRQ handler: gic_handle_irq Nov 23 23:00:08.834477 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 23 23:00:08.834484 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 23:00:08.834490 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 23 23:00:08.834497 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 23 23:00:08.834505 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Nov 23 23:00:08.834513 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Nov 23 23:00:08.834520 kernel: GICv3: using LPI property table @0x0000000100120000 Nov 23 23:00:08.834527 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Nov 23 23:00:08.834533 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 23:00:08.834540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:00:08.834547 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 23 23:00:08.834554 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 23 23:00:08.834561 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 23 23:00:08.834568 kernel: Console: colour dummy device 80x25 Nov 23 23:00:08.834576 kernel: ACPI: Core revision 20240827 Nov 23 23:00:08.834585 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 23 23:00:08.834592 kernel: pid_max: default: 32768 minimum: 301 Nov 23 23:00:08.834599 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 23:00:08.834607 kernel: landlock: Up and running. Nov 23 23:00:08.834614 kernel: SELinux: Initializing. Nov 23 23:00:08.834622 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:00:08.834629 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 23:00:08.834637 kernel: rcu: Hierarchical SRCU implementation. Nov 23 23:00:08.834644 kernel: rcu: Max phase no-delay instances is 400. Nov 23 23:00:08.834653 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 23:00:08.834661 kernel: Remapping and enabling EFI services. Nov 23 23:00:08.834668 kernel: smp: Bringing up secondary CPUs ... Nov 23 23:00:08.834675 kernel: Detected PIPT I-cache on CPU1 Nov 23 23:00:08.834683 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 23 23:00:08.834690 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Nov 23 23:00:08.834697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 23:00:08.834704 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 23 23:00:08.834711 kernel: smp: Brought up 1 node, 2 CPUs Nov 23 23:00:08.834720 kernel: SMP: Total of 2 processors activated. 
Nov 23 23:00:08.834732 kernel: CPU: All CPU(s) started at EL1 Nov 23 23:00:08.834739 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 23:00:08.834748 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 23 23:00:08.834756 kernel: CPU features: detected: Common not Private translations Nov 23 23:00:08.834763 kernel: CPU features: detected: CRC32 instructions Nov 23 23:00:08.834771 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 23 23:00:08.834779 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 23 23:00:08.834788 kernel: CPU features: detected: LSE atomic instructions Nov 23 23:00:08.834796 kernel: CPU features: detected: Privileged Access Never Nov 23 23:00:08.834803 kernel: CPU features: detected: RAS Extension Support Nov 23 23:00:08.834811 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 23 23:00:08.834818 kernel: alternatives: applying system-wide alternatives Nov 23 23:00:08.834826 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Nov 23 23:00:08.834834 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved) Nov 23 23:00:08.834842 kernel: devtmpfs: initialized Nov 23 23:00:08.834850 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 23:00:08.834859 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 23 23:00:08.834867 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 23 23:00:08.834875 kernel: 0 pages in range for non-PLT usage Nov 23 23:00:08.834882 kernel: 508400 pages in range for PLT usage Nov 23 23:00:08.834890 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 23:00:08.834898 kernel: SMBIOS 3.0.0 present. Nov 23 23:00:08.834905 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Nov 23 23:00:08.834913 kernel: DMI: Memory slots populated: 1/1 Nov 23 23:00:08.834921 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 23:00:08.834930 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 23:00:08.834938 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 23:00:08.834946 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 23:00:08.834953 kernel: audit: initializing netlink subsys (disabled) Nov 23 23:00:08.834961 kernel: audit: type=2000 audit(0.014:1): state=initialized audit_enabled=0 res=1 Nov 23 23:00:08.835007 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 23:00:08.835015 kernel: cpuidle: using governor menu Nov 23 23:00:08.835023 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 23 23:00:08.835031 kernel: ASID allocator initialised with 32768 entries Nov 23 23:00:08.835042 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 23:00:08.835050 kernel: Serial: AMBA PL011 UART driver Nov 23 23:00:08.835058 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 23:00:08.835066 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 23:00:08.835074 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 23:00:08.835083 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 23:00:08.835090 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 23:00:08.835098 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 23:00:08.835106 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 23:00:08.835115 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 23:00:08.835122 kernel: ACPI: Added _OSI(Module Device) Nov 23 23:00:08.835130 kernel: ACPI: Added _OSI(Processor Device) Nov 23 23:00:08.835137 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 23:00:08.835145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 23:00:08.835152 kernel: ACPI: Interpreter enabled Nov 23 23:00:08.835160 kernel: ACPI: Using GIC for interrupt routing Nov 23 23:00:08.835167 kernel: ACPI: MCFG table detected, 1 entries Nov 23 23:00:08.835175 kernel: ACPI: CPU0 has been hot-added Nov 23 23:00:08.835184 kernel: ACPI: CPU1 has been hot-added Nov 23 23:00:08.835192 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 23 23:00:08.835199 kernel: printk: legacy console [ttyAMA0] enabled Nov 23 23:00:08.835207 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 23 23:00:08.838012 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 23:00:08.838103 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 23:00:08.838167 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 23:00:08.838233 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 23 23:00:08.838308 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 23 23:00:08.838320 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 23 23:00:08.838328 kernel: PCI host bridge to bus 0000:00 Nov 23 23:00:08.838402 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 23 23:00:08.838460 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 23:00:08.838514 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 23 23:00:08.838566 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 23 23:00:08.838650 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 23 23:00:08.838730 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Nov 23 23:00:08.838794 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Nov 23 23:00:08.838856 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Nov 23 23:00:08.838925 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.839005 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Nov 23 23:00:08.839075 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 23 
23:00:08.839136 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Nov 23 23:00:08.839197 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Nov 23 23:00:08.839264 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.840415 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Nov 23 23:00:08.840496 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 23 23:00:08.840559 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Nov 23 23:00:08.840639 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.840702 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Nov 23 23:00:08.840762 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 23 23:00:08.840823 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Nov 23 23:00:08.840886 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Nov 23 23:00:08.840959 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.841046 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Nov 23 23:00:08.841112 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 23 23:00:08.841173 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Nov 23 23:00:08.841237 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Nov 23 23:00:08.841739 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.841823 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Nov 23 23:00:08.841888 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 23 23:00:08.841950 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Nov 23 23:00:08.842076 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Nov 23 23:00:08.842153 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.842218 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Nov 23 23:00:08.842282 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 23 23:00:08.842451 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Nov 23 23:00:08.842527 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Nov 23 23:00:08.842601 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.842673 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Nov 23 23:00:08.842734 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 23 23:00:08.842830 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Nov 23 23:00:08.842896 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Nov 23 23:00:08.842985 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.843052 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Nov 23 23:00:08.843118 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 23 23:00:08.843178 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Nov 23 23:00:08.843246 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 23:00:08.843325 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Nov 23 23:00:08.843389 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 23 23:00:08.843449 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Nov 23 23:00:08.843521 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Nov 23 23:00:08.843585 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Nov 23 23:00:08.843661 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 23 23:00:08.843726 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Nov 23 23:00:08.843807 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 23 23:00:08.843872 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Nov 23 23:00:08.843947 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Nov 23 23:00:08.844025 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Nov 23 23:00:08.844103 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Nov 23 23:00:08.844168 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Nov 23 23:00:08.844230 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Nov 23 23:00:08.844328 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Nov 23 23:00:08.844402 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Nov 23 23:00:08.844476 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Nov 23 23:00:08.844545 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Nov 23 23:00:08.844614 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Nov 23 23:00:08.844688 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Nov 23 23:00:08.844753 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Nov 23 23:00:08.844816 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Nov 23 23:00:08.844889 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 23 23:00:08.844952 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Nov 23 23:00:08.845080 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Nov 23 23:00:08.845145 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Nov 23 23:00:08.845211 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Nov 23 23:00:08.845274 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Nov 23 23:00:08.846432 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Nov 23 23:00:08.846515 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Nov 23 23:00:08.846598 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Nov 23 23:00:08.846674 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Nov 23 23:00:08.846743 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 23 23:00:08.846805 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Nov 23 23:00:08.846878 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Nov 23 23:00:08.846945 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 23 23:00:08.847053 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Nov 23 23:00:08.847124 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Nov 23 23:00:08.847191 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 23 23:00:08.850489 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Nov 23 23:00:08.850577 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Nov 23 23:00:08.850651 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 23 23:00:08.850715 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Nov 23 23:00:08.850777 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Nov 23 23:00:08.850854 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 23 23:00:08.850917 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Nov 23 23:00:08.850997 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Nov 23 23:00:08.851068 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 23 23:00:08.851130 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Nov 23 23:00:08.851195 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Nov 23 23:00:08.851261 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 23 23:00:08.851344 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Nov 23 23:00:08.851407 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Nov 23 23:00:08.851473 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Nov 23 23:00:08.851534 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Nov 23 23:00:08.851598 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Nov 23 23:00:08.851659 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Nov 23 23:00:08.851723 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Nov 23 23:00:08.851786 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Nov 23 23:00:08.851852 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Nov 23 23:00:08.851915 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Nov 23 23:00:08.852024 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Nov 23 23:00:08.852095 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Nov 23 23:00:08.852157 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Nov 23 23:00:08.852219 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Nov 23 23:00:08.852283 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Nov 23 23:00:08.853530 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Nov 23 
23:00:08.853601 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Nov 23 23:00:08.853664 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Nov 23 23:00:08.853730 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Nov 23 23:00:08.853793 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Nov 23 23:00:08.853862 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Nov 23 23:00:08.853924 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Nov 23 23:00:08.854044 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Nov 23 23:00:08.854126 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Nov 23 23:00:08.854194 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Nov 23 23:00:08.854257 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Nov 23 23:00:08.855393 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Nov 23 23:00:08.855492 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Nov 23 23:00:08.855560 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Nov 23 23:00:08.855623 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Nov 23 23:00:08.855688 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Nov 23 23:00:08.855751 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Nov 23 23:00:08.855815 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Nov 23 23:00:08.855875 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Nov 23 23:00:08.855937 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Nov 23 23:00:08.856027 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Nov 23 23:00:08.856097 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Nov 23 23:00:08.856160 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Nov 23 23:00:08.856225 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Nov 23 23:00:08.856286 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Nov 23 23:00:08.857435 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Nov 23 23:00:08.857513 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Nov 23 23:00:08.857580 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 23 23:00:08.857650 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Nov 23 23:00:08.857724 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 23 23:00:08.857786 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 23 23:00:08.857849 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Nov 23 23:00:08.857915 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Nov 23 23:00:08.858044 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Nov 23 23:00:08.858116 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 23 23:00:08.858188 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 23 23:00:08.858251 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Nov 23 23:00:08.858819 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Nov 23 23:00:08.858910 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Nov 23 23:00:08.858997 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Nov 23 23:00:08.859068 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 23 23:00:08.859131 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 23 23:00:08.859198 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Nov 23 23:00:08.860813 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Nov 23 23:00:08.860905 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Nov 23 23:00:08.860992 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 23 23:00:08.861062 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 23 23:00:08.861124 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Nov 23 23:00:08.861185 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Nov 23 23:00:08.861261 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Nov 23 23:00:08.861362 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Nov 23 23:00:08.861433 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 23 23:00:08.861496 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 23 23:00:08.861559 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Nov 23 23:00:08.861620 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Nov 23 23:00:08.861688 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Nov 23 23:00:08.861756 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Nov 23 23:00:08.861821 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 23 23:00:08.861896 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 23 23:00:08.861958 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Nov 23 23:00:08.862036 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Nov 23 23:00:08.862110 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Nov 23 23:00:08.862174 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Nov 23 23:00:08.862241 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Nov 23 23:00:08.862338 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 23 23:00:08.862409 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 23 23:00:08.862473 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Nov 23 23:00:08.862536 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Nov 23 23:00:08.862603 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 23 23:00:08.862667 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 23 23:00:08.862727 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Nov 23 23:00:08.862790 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Nov 23 23:00:08.862855 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 23 23:00:08.862919 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Nov 23 23:00:08.863026 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Nov 23 23:00:08.863096 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 23:00:08.863164 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 23 23:00:08.863220 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Nov 23 23:00:08.863277 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 23 23:00:08.863385 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 23 23:00:08.863449 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Nov 23 23:00:08.863514 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Nov 23 23:00:08.863581 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Nov 23 23:00:08.863638 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Nov 23 23:00:08.863694 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Nov 23 23:00:08.863757 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Nov 23 23:00:08.863814 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Nov 23 23:00:08.863871 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Nov 23 23:00:08.863942 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 23 23:00:08.864017 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Nov 23 23:00:08.864077 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Nov 23 23:00:08.864144 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Nov 23 23:00:08.864202 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Nov 23 23:00:08.864260 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Nov 23 23:00:08.865448 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Nov 23 23:00:08.865534 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Nov 23 23:00:08.865593 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Nov 23 23:00:08.865660 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Nov 23 23:00:08.865720 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Nov 23 23:00:08.865778 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Nov 23 23:00:08.865853 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Nov 23 23:00:08.865913 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Nov 23 23:00:08.865991 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Nov 23 23:00:08.866067 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Nov 23 23:00:08.866128 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Nov 23 23:00:08.866183 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 23:00:08.866193 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 23:00:08.866202 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 23:00:08.866213 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 23:00:08.866221 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 23:00:08.866229 kernel: iommu: Default domain type: Translated Nov 23 23:00:08.866238 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 23:00:08.866245 kernel: efivars: Registered efivars operations Nov 23 23:00:08.866253 kernel: vgaarb: loaded Nov 23 23:00:08.866261 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 23:00:08.866269 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 23:00:08.866278 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 23:00:08.866287 kernel: pnp: PnP ACPI init Nov 23 23:00:08.866407 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 23 
23:00:08.866420 kernel: pnp: PnP ACPI: found 1 devices Nov 23 23:00:08.866429 kernel: NET: Registered PF_INET protocol family Nov 23 23:00:08.866437 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 23:00:08.866446 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 23:00:08.866454 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 23:00:08.866462 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 23:00:08.866474 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 23:00:08.866482 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 23:00:08.866490 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:00:08.866498 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 23:00:08.866506 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 23:00:08.866584 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Nov 23 23:00:08.866595 kernel: PCI: CLS 0 bytes, default 64 Nov 23 23:00:08.866603 kernel: kvm [1]: HYP mode not available Nov 23 23:00:08.866611 kernel: Initialise system trusted keyrings Nov 23 23:00:08.866621 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 23:00:08.866629 kernel: Key type asymmetric registered Nov 23 23:00:08.866637 kernel: Asymmetric key parser 'x509' registered Nov 23 23:00:08.866645 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 23:00:08.866653 kernel: io scheduler mq-deadline registered Nov 23 23:00:08.866661 kernel: io scheduler kyber registered Nov 23 23:00:08.866669 kernel: io scheduler bfq registered Nov 23 23:00:08.866678 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 23 23:00:08.866744 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Nov 23 23:00:08.866811 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Nov 23 23:00:08.866885 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.866954 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Nov 23 23:00:08.867036 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Nov 23 23:00:08.867100 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.867166 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Nov 23 23:00:08.867228 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Nov 23 23:00:08.867536 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.867654 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Nov 23 23:00:08.867719 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Nov 23 23:00:08.867783 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.867848 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Nov 23 23:00:08.867913 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Nov 23 23:00:08.868022 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.868096 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Nov 23 23:00:08.868162 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Nov 23 23:00:08.868228 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.868317 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Nov 23 23:00:08.868385 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Nov 23 23:00:08.868449 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.868515 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Nov 23 23:00:08.868579 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Nov 23 23:00:08.868640 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.868654 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Nov 23 23:00:08.868721 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Nov 23 23:00:08.868784 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Nov 23 23:00:08.868846 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 23:00:08.868857 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 23:00:08.868865 kernel: ACPI: button: Power Button [PWRB] Nov 23 23:00:08.868874 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 23:00:08.868943 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Nov 23 23:00:08.869028 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Nov 23 23:00:08.869041 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 23:00:08.869049 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 23 23:00:08.869114 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Nov 23 23:00:08.869126 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Nov 23 23:00:08.869134 kernel: thunder_xcv, ver 1.0 Nov 23 23:00:08.869142 kernel: thunder_bgx, ver 1.0 Nov 23 23:00:08.869150 kernel: nicpf, ver 1.0 Nov 23 23:00:08.869157 kernel: nicvf, ver 1.0 Nov 23 23:00:08.869238 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 23:00:08.869775 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T23:00:08 UTC (1763938808) Nov 23 23:00:08.869795 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 23:00:08.869804 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 23 23:00:08.869812 kernel: watchdog: NMI not fully supported Nov 23 23:00:08.869820 kernel: watchdog: Hard watchdog permanently disabled Nov 23 23:00:08.869827 kernel: NET: Registered PF_INET6 protocol family Nov 23 23:00:08.869835 kernel: Segment Routing with IPv6 Nov 23 23:00:08.869848 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 23:00:08.869855 kernel: NET: Registered PF_PACKET protocol family Nov 23 23:00:08.869863 kernel: Key type dns_resolver registered Nov 23 23:00:08.869872 kernel: registered taskstats version 1 Nov 23 23:00:08.869879 kernel: Loading compiled-in X.509 certificates Nov 23 23:00:08.869887 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 00c36da29593053a7da9cd3c5945ae69451ce339' Nov 23 23:00:08.869895 kernel: Demotion targets for Node 0: null Nov 23 23:00:08.869903 kernel: Key type .fscrypt 
registered Nov 23 23:00:08.869910 kernel: Key type fscrypt-provisioning registered Nov 23 23:00:08.869918 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 23:00:08.869928 kernel: ima: Allocated hash algorithm: sha1 Nov 23 23:00:08.869935 kernel: ima: No architecture policies found Nov 23 23:00:08.869943 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 23:00:08.869951 kernel: clk: Disabling unused clocks Nov 23 23:00:08.869959 kernel: PM: genpd: Disabling unused power domains Nov 23 23:00:08.870008 kernel: Warning: unable to open an initial console. Nov 23 23:00:08.870017 kernel: Freeing unused kernel memory: 39552K Nov 23 23:00:08.870025 kernel: Run /init as init process Nov 23 23:00:08.870033 kernel: with arguments: Nov 23 23:00:08.870044 kernel: /init Nov 23 23:00:08.870052 kernel: with environment: Nov 23 23:00:08.870060 kernel: HOME=/ Nov 23 23:00:08.870068 kernel: TERM=linux Nov 23 23:00:08.870077 systemd[1]: Successfully made /usr/ read-only. Nov 23 23:00:08.870089 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:00:08.870098 systemd[1]: Detected virtualization kvm. Nov 23 23:00:08.870109 systemd[1]: Detected architecture arm64. Nov 23 23:00:08.870117 systemd[1]: Running in initrd. Nov 23 23:00:08.870125 systemd[1]: No hostname configured, using default hostname. Nov 23 23:00:08.870134 systemd[1]: Hostname set to . Nov 23 23:00:08.870142 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:00:08.870151 systemd[1]: Queued start job for default target initrd.target. Nov 23 23:00:08.870160 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:00:08.870169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:00:08.870180 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 23:00:08.870188 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:00:08.870197 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 23:00:08.870206 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 23:00:08.870216 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 23:00:08.870225 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 23:00:08.870233 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:00:08.870243 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:00:08.870252 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:00:08.870262 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:00:08.870271 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:00:08.870279 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:00:08.870302 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 23 23:00:08.870313 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:00:08.870322 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 23:00:08.870330 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 23:00:08.870342 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:00:08.870351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:00:08.870360 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:00:08.870368 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:00:08.870376 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 23:00:08.870385 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:00:08.870394 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 23 23:00:08.870403 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 23:00:08.870414 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 23:00:08.870422 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:00:08.870431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:00:08.870440 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:08.870448 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 23:00:08.870457 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:00:08.870467 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 23:00:08.870476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 23:00:08.870512 systemd-journald[245]: Collecting audit messages is disabled. Nov 23 23:00:08.870535 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 23:00:08.870544 kernel: Bridge firewalling registered Nov 23 23:00:08.870553 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:00:08.870562 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:00:08.870571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:08.870579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 23 23:00:08.870588 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 23:00:08.870598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:00:08.870608 systemd-journald[245]: Journal started Nov 23 23:00:08.870627 systemd-journald[245]: Runtime Journal (/run/log/journal/89f14cf1cab54555a7e4650d176f54cd) is 8M, max 76.5M, 68.5M free. Nov 23 23:00:08.825712 systemd-modules-load[247]: Inserted module 'overlay' Nov 23 23:00:08.847655 systemd-modules-load[247]: Inserted module 'br_netfilter' Nov 23 23:00:08.876356 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:00:08.884080 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 23 23:00:08.888758 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:00:08.895793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:00:08.904356 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 23:00:08.905519 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:00:08.909677 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:00:08.912570 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 23:00:08.914715 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:00:08.945476 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=4db094b704dd398addf25219e01d6d8f197b31dbf6377199102cc61dad0e4bb2 Nov 23 23:00:08.965192 systemd-resolved[286]: Positive Trust Anchors: Nov 23 23:00:08.966145 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:00:08.966180 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:00:08.973522 systemd-resolved[286]: Defaulting to hostname 'linux'. Nov 23 23:00:08.975271 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:00:08.976004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:00:09.054358 kernel: SCSI subsystem initialized Nov 23 23:00:09.060015 kernel: Loading iSCSI transport class v2.0-870. Nov 23 23:00:09.067356 kernel: iscsi: registered transport (tcp) Nov 23 23:00:09.082418 kernel: iscsi: registered transport (qla4xxx) Nov 23 23:00:09.082489 kernel: QLogic iSCSI HBA Driver Nov 23 23:00:09.109443 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:00:09.135454 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:00:09.137864 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:00:09.199718 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 23 23:00:09.204435 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 23 23:00:09.269360 kernel: raid6: neonx8 gen() 15689 MB/s Nov 23 23:00:09.286358 kernel: raid6: neonx4 gen() 15739 MB/s Nov 23 23:00:09.303354 kernel: raid6: neonx2 gen() 13171 MB/s Nov 23 23:00:09.320359 kernel: raid6: neonx1 gen() 10417 MB/s Nov 23 23:00:09.337410 kernel: raid6: int64x8 gen() 6877 MB/s Nov 23 23:00:09.354464 kernel: raid6: int64x4 gen() 7305 MB/s Nov 23 23:00:09.371360 kernel: raid6: int64x2 gen() 6077 MB/s Nov 23 23:00:09.388353 kernel: raid6: int64x1 gen() 5030 MB/s Nov 23 23:00:09.388435 kernel: raid6: using algorithm neonx4 gen() 15739 MB/s Nov 23 23:00:09.405363 kernel: raid6: .... xor() 12286 MB/s, rmw enabled Nov 23 23:00:09.405440 kernel: raid6: using neon recovery algorithm Nov 23 23:00:09.411967 kernel: xor: measuring software checksum speed Nov 23 23:00:09.412047 kernel: 8regs : 21607 MB/sec Nov 23 23:00:09.412058 kernel: 32regs : 15945 MB/sec Nov 23 23:00:09.412078 kernel: arm64_neon : 28109 MB/sec Nov 23 23:00:09.412087 kernel: xor: using function: arm64_neon (28109 MB/sec) Nov 23 23:00:09.468398 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 23:00:09.476080 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:00:09.479089 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:00:09.505943 systemd-udevd[494]: Using default interface naming scheme 'v255'. Nov 23 23:00:09.510495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:00:09.515946 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 23:00:09.545254 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Nov 23 23:00:09.580389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:00:09.582795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:00:09.643919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:00:09.647945 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 23:00:09.740333 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Nov 23 23:00:09.740531 kernel: scsi host0: Virtio SCSI HBA Nov 23 23:00:09.749678 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 23 23:00:09.749759 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 23 23:00:09.791016 kernel: ACPI: bus type USB registered Nov 23 23:00:09.791086 kernel: sr 0:0:0:0: Power-on or device reset occurred Nov 23 23:00:09.791356 kernel: usbcore: registered new interface driver usbfs Nov 23 23:00:09.791373 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Nov 23 23:00:09.791482 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 23 23:00:09.795326 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Nov 23 23:00:09.795818 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:00:09.797523 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:09.800913 kernel: usbcore: registered new interface driver hub Nov 23 23:00:09.799595 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:09.802892 kernel: usbcore: registered new device driver usb Nov 23 23:00:09.803825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 23 23:00:09.809168 kernel: sd 0:0:0:1: Power-on or device reset occurred Nov 23 23:00:09.809429 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 23 23:00:09.809513 kernel: sd 0:0:0:1: [sda] Write Protect is off Nov 23 23:00:09.809587 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Nov 23 23:00:09.809660 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 23 23:00:09.818324 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 23 23:00:09.818384 kernel: GPT:17805311 != 80003071 Nov 23 23:00:09.818396 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 23:00:09.818406 kernel: GPT:17805311 != 80003071 Nov 23 23:00:09.818415 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 23:00:09.818095 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:00:09.821049 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:00:09.821103 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Nov 23 23:00:09.833358 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 23:00:09.834784 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 23 23:00:09.839354 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 23 23:00:09.841490 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 23:00:09.841697 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 23 23:00:09.841779 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 23 23:00:09.844325 kernel: hub 1-0:1.0: USB hub found Nov 23 23:00:09.844535 kernel: hub 1-0:1.0: 4 ports detected Nov 23 23:00:09.846325 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 23 23:00:09.846522 kernel: hub 2-0:1.0: USB hub found Nov 23 23:00:09.846617 kernel: hub 2-0:1.0: 4 ports detected Nov 23 23:00:09.860320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:09.890712 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 23 23:00:09.904540 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 23 23:00:09.925712 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 23 23:00:09.942437 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 23 23:00:09.944024 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 23 23:00:09.945822 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 23:00:09.951069 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:00:09.952623 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:00:09.953558 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:00:09.956665 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 23:00:09.958534 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 23:00:09.981559 disk-uuid[601]: Primary Header is updated. Nov 23 23:00:09.981559 disk-uuid[601]: Secondary Entries is updated. Nov 23 23:00:09.981559 disk-uuid[601]: Secondary Header is updated. 
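
The GPT warnings above are typical when a smaller disk image is written to a larger block device: the backup (alternate) header still sits at the image's old last LBA (17805311) instead of the device's real last LBA (80003071 on this 80003072-sector disk). A minimal Python sketch of the same consistency check, assuming 512-byte logical sectors and read access to the device node:

# Illustrative only: read the primary GPT header at LBA 1 and compare its
# AlternateLBA field against the device's actual last LBA. This mirrors the
# kernel check behind "GPT:17805311 != 80003071"; repairing the layout is
# left to tools such as sgdisk -e or GNU Parted, as the kernel message says.
import struct

def gpt_backup_location(device: str, sector: int = 512) -> tuple[int, int]:
    with open(device, "rb") as disk:
        disk.seek(sector)                        # primary GPT header lives at LBA 1
        header = disk.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError(f"{device}: no GPT signature")
        alternate_lba = struct.unpack_from("<Q", header, 32)[0]
        disk.seek(0, 2)                          # seek to end to learn the disk size
        last_lba = disk.tell() // sector - 1
    return alternate_lba, last_lba               # would be (17805311, 80003071) here
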
Nov 23 23:00:09.988622 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:00:09.992574 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:00:10.087657 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 23 23:00:10.219176 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Nov 23 23:00:10.219251 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 23 23:00:10.220512 kernel: usbcore: registered new interface driver usbhid Nov 23 23:00:10.220553 kernel: usbhid: USB HID core driver Nov 23 23:00:10.324436 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Nov 23 23:00:10.458343 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Nov 23 23:00:10.511337 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Nov 23 23:00:11.019647 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 23:00:11.020235 disk-uuid[604]: The operation has completed successfully. Nov 23 23:00:11.087365 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 23:00:11.087509 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 23:00:11.114670 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 23:00:11.140308 sh[627]: Success Nov 23 23:00:11.156397 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 23:00:11.156500 kernel: device-mapper: uevent: version 1.0.3 Nov 23 23:00:11.156513 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 23:00:11.165400 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 23:00:11.216044 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 23:00:11.220450 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 23:00:11.235189 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 23 23:00:11.244358 kernel: BTRFS: device fsid 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (639) Nov 23 23:00:11.247343 kernel: BTRFS info (device dm-0): first mount of filesystem 5fd06d80-8dd4-4ca0-aa0c-93ddab5f4498 Nov 23 23:00:11.247407 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:00:11.253855 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 23 23:00:11.253945 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 23:00:11.254000 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 23:00:11.256856 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 23:00:11.257597 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:00:11.258697 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 23:00:11.259536 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 23:00:11.262528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
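
verity-setup.service above assembles /dev/mapper/usr so that reads from the /usr partition are checked against a hash tree whose root must equal the verity.usrhash value on the kernel command line. The real dm-verity on-disk format (salted, multi-level, verified lazily on read) is more involved; the deliberately simplified, single-level Python sketch below only illustrates the principle that changing any block changes the root hash:

# Illustrative only: a single-level, unsalted stand-in for the hash-tree idea
# behind dm-verity. Not the actual format used for /dev/mapper/usr here.
import hashlib

BLOCK = 4096

def toy_root_hash(image_path: str) -> str:
    block_hashes = []
    with open(image_path, "rb") as img:
        while chunk := img.read(BLOCK):
            block_hashes.append(hashlib.sha256(chunk).digest())
    return hashlib.sha256(b"".join(block_hashes)).hexdigest()

# A mismatch between such a root hash and the expected value (the
# verity.usrhash=4db094b7... parameter above) means the data was altered.
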
Nov 23 23:00:11.297351 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (671) Nov 23 23:00:11.298869 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:00:11.298924 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:00:11.306185 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 23:00:11.306284 kernel: BTRFS info (device sda6): turning on async discard Nov 23 23:00:11.306316 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 23:00:11.311330 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:00:11.314095 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 23:00:11.316532 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 23:00:11.401500 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:00:11.406683 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:00:11.446588 systemd-networkd[808]: lo: Link UP Nov 23 23:00:11.446600 systemd-networkd[808]: lo: Gained carrier Nov 23 23:00:11.448256 systemd-networkd[808]: Enumeration completed Nov 23 23:00:11.448401 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:00:11.449604 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:00:11.449609 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:00:11.450127 systemd[1]: Reached target network.target - Network. Nov 23 23:00:11.452415 systemd-networkd[808]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:00:11.452419 systemd-networkd[808]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:00:11.453706 systemd-networkd[808]: eth0: Link UP Nov 23 23:00:11.454162 systemd-networkd[808]: eth1: Link UP Nov 23 23:00:11.454358 systemd-networkd[808]: eth0: Gained carrier Nov 23 23:00:11.454372 systemd-networkd[808]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:00:11.459499 systemd-networkd[808]: eth1: Gained carrier Nov 23 23:00:11.459514 systemd-networkd[808]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:00:11.481420 systemd-networkd[808]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 23 23:00:11.482028 ignition[725]: Ignition 2.22.0 Nov 23 23:00:11.482035 ignition[725]: Stage: fetch-offline Nov 23 23:00:11.482068 ignition[725]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:00:11.482076 ignition[725]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 23:00:11.482185 ignition[725]: parsed url from cmdline: "" Nov 23 23:00:11.482189 ignition[725]: no config URL provided Nov 23 23:00:11.482196 ignition[725]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 23:00:11.482203 ignition[725]: no config at "/usr/lib/ignition/user.ign" Nov 23 23:00:11.488482 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
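
systemd-networkd above brings up eth0 and eth1 by matching them against a catch-all default unit (zz-default.network) and configuring them via DHCP; eth1 acquires 10.0.0.3/32 here and eth0 acquires 49.12.4.178/32 shortly after. The sketch below only gestures at what such a catch-all unit amounts to; the unit body and file name are assumptions for illustration, not Flatcar's shipped zz-default.network:

# Illustrative only: write a hypothetical catch-all networkd unit similar in
# spirit to the default unit matched above. Runtime units normally live under
# /run/systemd/network; the name 90-fallback-dhcp.network is made up.
from pathlib import Path

FALLBACK_UNIT = "[Match]\nName=e*\n\n[Network]\nDHCP=yes\n"

def write_fallback_unit(unit_dir: str = "/run/systemd/network") -> Path:
    path = Path(unit_dir) / "90-fallback-dhcp.network"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(FALLBACK_UNIT)
    return path
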
Nov 23 23:00:11.482208 ignition[725]: failed to fetch config: resource requires networking Nov 23 23:00:11.490571 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 23 23:00:11.483253 ignition[725]: Ignition finished successfully Nov 23 23:00:11.517515 systemd-networkd[808]: eth0: DHCPv4 address 49.12.4.178/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 23 23:00:11.541905 ignition[819]: Ignition 2.22.0 Nov 23 23:00:11.541923 ignition[819]: Stage: fetch Nov 23 23:00:11.542116 ignition[819]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:00:11.542127 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 23:00:11.542268 ignition[819]: parsed url from cmdline: "" Nov 23 23:00:11.542272 ignition[819]: no config URL provided Nov 23 23:00:11.542277 ignition[819]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 23:00:11.542286 ignition[819]: no config at "/usr/lib/ignition/user.ign" Nov 23 23:00:11.542348 ignition[819]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 23 23:00:11.548426 ignition[819]: GET result: OK Nov 23 23:00:11.548786 ignition[819]: parsing config with SHA512: 62d9676f91b357bbcb8c2675fd2b877df8d928014907b595a4042d6184f293c96dbcb383022731ecde7153c333a01f4fe3ac7a1cf08e0bf12d4430883f8061eb Nov 23 23:00:11.559733 unknown[819]: fetched base config from "system" Nov 23 23:00:11.559744 unknown[819]: fetched base config from "system" Nov 23 23:00:11.560118 ignition[819]: fetch: fetch complete Nov 23 23:00:11.559749 unknown[819]: fetched user config from "hetzner" Nov 23 23:00:11.560123 ignition[819]: fetch: fetch passed Nov 23 23:00:11.563189 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 23 23:00:11.560176 ignition[819]: Ignition finished successfully Nov 23 23:00:11.566942 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 23:00:11.603788 ignition[825]: Ignition 2.22.0 Nov 23 23:00:11.603812 ignition[825]: Stage: kargs Nov 23 23:00:11.603999 ignition[825]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:00:11.604011 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 23:00:11.605203 ignition[825]: kargs: kargs passed Nov 23 23:00:11.605271 ignition[825]: Ignition finished successfully Nov 23 23:00:11.607938 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 23:00:11.610534 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 23 23:00:11.652712 ignition[831]: Ignition 2.22.0 Nov 23 23:00:11.652725 ignition[831]: Stage: disks Nov 23 23:00:11.652870 ignition[831]: no configs at "/usr/lib/ignition/base.d" Nov 23 23:00:11.652879 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 23:00:11.653758 ignition[831]: disks: disks passed Nov 23 23:00:11.653812 ignition[831]: Ignition finished successfully Nov 23 23:00:11.657338 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 23:00:11.658450 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 23:00:11.659594 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 23:00:11.660879 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:00:11.661991 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:00:11.662911 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:00:11.664859 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
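
The fetch-offline stage above gives up because the config source requires networking; once the interfaces are up, the fetch stage retrieves the user configuration from the Hetzner metadata service and logs a SHA512 fingerprint of it before merging the "system" and "hetzner" base configs. A rough Python sketch of just that fetch-and-fingerprint step, with the URL taken from the log (Ignition itself additionally handles retries, caching, and config merging):

# Illustrative only: mimic the GET + SHA512 fingerprint that ignition[819]
# logs above. This is not Ignition; it only reproduces the visible steps.
import hashlib
import urllib.request

USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"

def userdata_sha512(url: str = USERDATA_URL, timeout: float = 5.0) -> str:
    with urllib.request.urlopen(url, timeout=timeout) as resp:   # "GET ... attempt #1"
        body = resp.read()
    return hashlib.sha512(body).hexdigest()      # "parsing config with SHA512: ..."
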
Nov 23 23:00:11.697782 systemd-fsck[840]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Nov 23 23:00:11.703120 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 23:00:11.705453 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 23:00:11.784339 kernel: EXT4-fs (sda9): mounted filesystem fa3f8731-d4e3-4e51-b6db-fa404206cf07 r/w with ordered data mode. Quota mode: none. Nov 23 23:00:11.786648 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 23:00:11.788536 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 23:00:11.791158 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:00:11.795408 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 23:00:11.802706 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 23 23:00:11.806516 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 23:00:11.806560 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:00:11.813173 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 23:00:11.816428 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 23:00:11.819788 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (848) Nov 23 23:00:11.825315 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:00:11.825368 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:00:11.837477 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 23:00:11.837547 kernel: BTRFS info (device sda6): turning on async discard Nov 23 23:00:11.837561 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 23:00:11.845670 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 23:00:11.882403 initrd-setup-root[875]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 23:00:11.892612 coreos-metadata[850]: Nov 23 23:00:11.892 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 23 23:00:11.894834 initrd-setup-root[882]: cut: /sysroot/etc/group: No such file or directory Nov 23 23:00:11.895803 coreos-metadata[850]: Nov 23 23:00:11.895 INFO Fetch successful Nov 23 23:00:11.895803 coreos-metadata[850]: Nov 23 23:00:11.895 INFO wrote hostname ci-4459-2-1-d-6a40a07c08 to /sysroot/etc/hostname Nov 23 23:00:11.899497 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 23:00:11.901760 initrd-setup-root[889]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 23:00:11.906237 initrd-setup-root[897]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 23:00:12.021881 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 23 23:00:12.024164 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 23:00:12.025672 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 23:00:12.048446 kernel: BTRFS info (device sda6): last unmount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:00:12.074215 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
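
flatcar-metadata-hostname.service above fetches the instance hostname from the Hetzner metadata endpoint and writes it into the not-yet-pivoted root (ci-4459-2-1-d-6a40a07c08 into /sysroot/etc/hostname). A minimal sketch of that effect, with the URL and paths taken from the log; the real work is done by the coreos-metadata helper shown above, not by this snippet:

# Illustrative only: fetch the hostname the way coreos-metadata[850] logs it
# and write it under /sysroot so it survives the switch to the real root.
import urllib.request

HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

def write_sysroot_hostname(sysroot: str = "/sysroot", timeout: float = 5.0) -> str:
    with urllib.request.urlopen(HOSTNAME_URL, timeout=timeout) as resp:
        hostname = resp.read().decode().strip()          # ci-4459-2-1-d-6a40a07c08
    with open(sysroot + "/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname
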
Nov 23 23:00:12.090732 ignition[965]: INFO : Ignition 2.22.0 Nov 23 23:00:12.090732 ignition[965]: INFO : Stage: mount Nov 23 23:00:12.093363 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:00:12.093363 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 23:00:12.093363 ignition[965]: INFO : mount: mount passed Nov 23 23:00:12.093363 ignition[965]: INFO : Ignition finished successfully Nov 23 23:00:12.095194 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 23:00:12.098044 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 23:00:12.246542 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 23:00:12.250042 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 23:00:12.274363 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (977) Nov 23 23:00:12.277526 kernel: BTRFS info (device sda6): first mount of filesystem fbc9a6bc-8b9c-4847-949c-e8c4f3bf01b3 Nov 23 23:00:12.277696 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 23:00:12.281683 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 23:00:12.281736 kernel: BTRFS info (device sda6): turning on async discard Nov 23 23:00:12.281752 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 23:00:12.285413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 23:00:12.319923 ignition[995]: INFO : Ignition 2.22.0 Nov 23 23:00:12.319923 ignition[995]: INFO : Stage: files Nov 23 23:00:12.321229 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:00:12.321229 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 23:00:12.321229 ignition[995]: DEBUG : files: compiled without relabeling support, skipping Nov 23 23:00:12.324596 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 23:00:12.324596 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 23:00:12.326601 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 23:00:12.327489 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 23:00:12.328762 unknown[995]: wrote ssh authorized keys file for user: core Nov 23 23:00:12.330085 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 23:00:12.331569 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 23:00:12.332820 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Nov 23 23:00:12.413225 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 23:00:12.525467 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 23:00:12.526840 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 23:00:12.526840 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 23:00:12.526840 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing 
file "/sysroot/home/core/nginx.yaml" Nov 23 23:00:12.526840 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 23:00:12.526840 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:00:12.536384 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Nov 23 23:00:12.846420 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 23:00:12.988675 systemd-networkd[808]: eth1: Gained IPv6LL Nov 23 23:00:13.180678 systemd-networkd[808]: eth0: Gained IPv6LL Nov 23 23:00:13.411276 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 23:00:13.412986 ignition[995]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 23:00:13.414488 ignition[995]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:00:13.418019 ignition[995]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 23:00:13.418019 ignition[995]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 23:00:13.418019 ignition[995]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 23 23:00:13.418019 ignition[995]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 23 23:00:13.418019 ignition[995]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 23 23:00:13.418019 ignition[995]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 23 
23:00:13.418019 ignition[995]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 23 23:00:13.418019 ignition[995]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 23:00:13.437052 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:00:13.437052 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 23:00:13.437052 ignition[995]: INFO : files: files passed Nov 23 23:00:13.437052 ignition[995]: INFO : Ignition finished successfully Nov 23 23:00:13.428828 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 23:00:13.433208 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 23:00:13.438286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 23:00:13.456378 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 23:00:13.457324 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 23:00:13.464248 initrd-setup-root-after-ignition[1024]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:00:13.464248 initrd-setup-root-after-ignition[1024]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:00:13.467817 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 23:00:13.471586 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:00:13.473675 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 23:00:13.476846 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 23 23:00:13.532614 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 23:00:13.532792 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 23:00:13.535202 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 23:00:13.536387 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 23:00:13.537172 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 23:00:13.538153 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 23:00:13.582122 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:00:13.585457 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 23:00:13.614596 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:00:13.616201 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:00:13.617903 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 23:00:13.618600 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 23:00:13.618738 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 23:00:13.620950 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 23:00:13.621794 systemd[1]: Stopped target basic.target - Basic System. Nov 23 23:00:13.623116 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
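
The files stage above is driven entirely by the fetched Ignition config: it writes files under /sysroot (the helm tarball, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, update.conf), creates the /etc/extensions/kubernetes.raw link, installs prepare-helm.service plus a coreos-metadata drop-in, and enables presets. As a loose illustration of the config shape that produces such operations, here is a skeletal Ignition spec-v3 style document built in Python; the spec version, contents, and unit bodies are placeholders, not the user's actual config:

# Illustrative only: a skeleton of an Ignition (spec v3) config that would
# yield file/link/unit operations like those logged by ignition[995] above.
import json

config = {
    "ignition": {"version": "3.4.0"},                 # version is an assumption
    "storage": {
        "files": [
            {"path": "/home/core/nginx.yaml",
             "contents": {"source": "data:,placeholder"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=placeholder\n[Service]\nExecStart=/bin/true\n"},
            {"name": "coreos-metadata.service",
             "dropins": [{"name": "00-custom-metadata.conf",
                          "contents": "[Service]\n# placeholder drop-in\n"}]},
        ],
    },
}

print(json.dumps(config, indent=2))
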
Nov 23 23:00:13.624547 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 23:00:13.625686 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 23:00:13.627162 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 23:00:13.628506 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 23:00:13.629803 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 23:00:13.631127 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 23:00:13.632243 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 23:00:13.633641 systemd[1]: Stopped target swap.target - Swaps. Nov 23 23:00:13.634618 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 23:00:13.634759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 23:00:13.636280 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:00:13.637010 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:00:13.638056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 23:00:13.638523 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:00:13.639279 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 23:00:13.639425 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 23:00:13.641032 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 23:00:13.641163 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 23:00:13.642477 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 23:00:13.642621 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 23:00:13.643753 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 23 23:00:13.643855 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 23:00:13.645843 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 23:00:13.647465 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 23:00:13.647615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:00:13.652640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 23:00:13.653231 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 23:00:13.653435 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:00:13.655682 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 23:00:13.655806 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 23:00:13.661668 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 23:00:13.661802 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 23 23:00:13.684154 ignition[1048]: INFO : Ignition 2.22.0 Nov 23 23:00:13.684154 ignition[1048]: INFO : Stage: umount Nov 23 23:00:13.687553 ignition[1048]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 23:00:13.687553 ignition[1048]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 23:00:13.687553 ignition[1048]: INFO : umount: umount passed Nov 23 23:00:13.687553 ignition[1048]: INFO : Ignition finished successfully Nov 23 23:00:13.687381 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 23:00:13.689032 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 23:00:13.694902 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 23:00:13.699597 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 23:00:13.699727 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 23:00:13.701586 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 23:00:13.701657 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 23:00:13.703012 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 23 23:00:13.703060 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 23 23:00:13.704486 systemd[1]: Stopped target network.target - Network. Nov 23 23:00:13.705385 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 23:00:13.705448 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 23:00:13.706580 systemd[1]: Stopped target paths.target - Path Units. Nov 23 23:00:13.709881 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 23:00:13.713430 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:00:13.716239 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 23:00:13.717262 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 23:00:13.718596 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 23:00:13.718648 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 23:00:13.719986 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 23:00:13.720039 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 23:00:13.722108 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 23:00:13.722183 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 23:00:13.728628 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 23:00:13.728742 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 23:00:13.731569 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 23:00:13.732251 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 23:00:13.742647 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 23:00:13.742818 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 23:00:13.744497 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 23:00:13.744612 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 23:00:13.750149 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 23:00:13.750436 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 23:00:13.750540 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Nov 23 23:00:13.754385 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 23:00:13.755664 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 23:00:13.757190 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 23:00:13.757234 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:00:13.758024 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 23:00:13.758090 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 23:00:13.760218 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 23:00:13.763054 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 23:00:13.763147 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 23:00:13.764536 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 23:00:13.764595 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:00:13.768001 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 23:00:13.768135 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 23:00:13.771099 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 23:00:13.771176 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 23:00:13.773955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:00:13.777588 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 23:00:13.777685 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:00:13.791223 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 23:00:13.794303 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:00:13.796043 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 23:00:13.796112 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 23:00:13.797451 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 23:00:13.797494 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:00:13.798627 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 23:00:13.798680 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 23:00:13.800731 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 23:00:13.800789 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 23:00:13.802500 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 23:00:13.802559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 23:00:13.805024 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 23:00:13.807358 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 23:00:13.807446 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:00:13.808349 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 23:00:13.808403 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 23:00:13.810441 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 23 23:00:13.810490 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:13.816006 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 23:00:13.816094 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 23:00:13.816132 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 23:00:13.816546 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 23:00:13.820533 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 23:00:13.828897 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 23:00:13.829132 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 23:00:13.831880 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 23:00:13.834539 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 23:00:13.860046 systemd[1]: Switching root. Nov 23 23:00:13.908032 systemd-journald[245]: Journal stopped Nov 23 23:00:14.852721 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Nov 23 23:00:14.852801 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 23:00:14.852815 kernel: SELinux: policy capability open_perms=1 Nov 23 23:00:14.852826 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 23:00:14.852839 kernel: SELinux: policy capability always_check_network=0 Nov 23 23:00:14.852849 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 23:00:14.852859 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 23:00:14.852868 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 23:00:14.852880 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 23:00:14.852890 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 23:00:14.852901 kernel: audit: type=1403 audit(1763938814.058:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 23:00:14.852912 systemd[1]: Successfully loaded SELinux policy in 66.143ms. Nov 23 23:00:14.852950 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.609ms. Nov 23 23:00:14.852965 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 23:00:14.852976 systemd[1]: Detected virtualization kvm. Nov 23 23:00:14.852987 systemd[1]: Detected architecture arm64. Nov 23 23:00:14.852997 systemd[1]: Detected first boot. Nov 23 23:00:14.853008 systemd[1]: Hostname set to . Nov 23 23:00:14.853020 systemd[1]: Initializing machine ID from VM UUID. Nov 23 23:00:14.853031 zram_generator::config[1091]: No configuration found. Nov 23 23:00:14.853042 kernel: NET: Registered PF_VSOCK protocol family Nov 23 23:00:14.853052 systemd[1]: Populated /etc with preset unit settings. Nov 23 23:00:14.853064 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 23:00:14.853074 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 23:00:14.853090 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 23:00:14.853102 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
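
After the switch to the real root, systemd detects KVM, treats this as a first boot, and initializes the machine ID from the VM's UUID. Conceptually that amounts to reading the SMBIOS product UUID, stripping the dashes, and lower-casing it; the sketch below only approximates that idea, and both the sysfs path and the exact derivation are assumptions (systemd's real logic weighs several sources and virtualization specifics):

# Illustrative only: approximate "Initializing machine ID from VM UUID" by
# turning the SMBIOS product UUID into the 32-hex-digit machine-id format.
from pathlib import Path

def machine_id_from_vm_uuid(uuid_path: str = "/sys/class/dmi/id/product_uuid") -> str:
    uuid = Path(uuid_path).read_text().strip()
    return uuid.replace("-", "").lower()
    # The journal path later in this log shows this machine's resulting ID:
    # 89f14cf1cab54555a7e4650d176f54cd
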
Nov 23 23:00:14.853113 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 23:00:14.853123 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 23:00:14.853133 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 23:00:14.853145 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 23:00:14.853155 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 23:00:14.853166 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 23:00:14.853177 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 23:00:14.853188 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 23:00:14.853202 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 23:00:14.853213 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 23:00:14.853223 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 23 23:00:14.853234 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 23:00:14.853244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 23:00:14.853255 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 23:00:14.853267 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 23 23:00:14.853278 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 23:00:14.853302 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 23:00:14.853316 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 23:00:14.853327 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 23:00:14.853337 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 23:00:14.853347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 23:00:14.853360 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 23:00:14.853371 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 23:00:14.853381 systemd[1]: Reached target slices.target - Slice Units. Nov 23 23:00:14.853392 systemd[1]: Reached target swap.target - Swaps. Nov 23 23:00:14.853402 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 23:00:14.853413 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 23:00:14.853425 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 23:00:14.853436 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 23:00:14.853447 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 23:00:14.853459 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 23:00:14.853470 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 23:00:14.853481 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 23:00:14.853491 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Nov 23 23:00:14.853502 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 23:00:14.853513 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 23:00:14.853526 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 23:00:14.853537 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 23:00:14.853549 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 23:00:14.853561 systemd[1]: Reached target machines.target - Containers. Nov 23 23:00:14.853572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 23:00:14.853583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:00:14.853594 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 23:00:14.853604 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 23:00:14.853615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:00:14.853626 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:00:14.853636 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:00:14.853648 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 23:00:14.853658 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:00:14.853670 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 23:00:14.853682 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 23:00:14.853694 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 23:00:14.853704 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 23:00:14.853715 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 23:00:14.853727 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:00:14.853739 kernel: fuse: init (API version 7.41) Nov 23 23:00:14.853749 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 23:00:14.853764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 23:00:14.853775 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 23:00:14.853787 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 23:00:14.853800 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 23:00:14.853811 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 23:00:14.853824 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 23:00:14.853834 systemd[1]: Stopped verity-setup.service. Nov 23 23:00:14.853845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 23:00:14.853857 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 23:00:14.853868 systemd[1]: Mounted media.mount - External Media Directory. 
Nov 23 23:00:14.853879 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 23:00:14.853889 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 23:00:14.853900 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 23:00:14.853911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 23:00:14.853921 kernel: loop: module loaded Nov 23 23:00:14.853941 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 23:00:14.853954 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 23:00:14.854009 systemd-journald[1159]: Collecting audit messages is disabled. Nov 23 23:00:14.854044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:00:14.854061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:00:14.854075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:00:14.854090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:00:14.854103 systemd-journald[1159]: Journal started Nov 23 23:00:14.854125 systemd-journald[1159]: Runtime Journal (/run/log/journal/89f14cf1cab54555a7e4650d176f54cd) is 8M, max 76.5M, 68.5M free. Nov 23 23:00:14.590663 systemd[1]: Queued start job for default target multi-user.target. Nov 23 23:00:14.858524 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 23:00:14.612857 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 23 23:00:14.613771 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 23:00:14.858279 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 23:00:14.858482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 23:00:14.859685 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:00:14.859850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:00:14.861345 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 23:00:14.863758 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 23:00:14.886911 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 23:00:14.888146 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 23:00:14.890973 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 23:00:14.897474 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 23:00:14.902430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 23:00:14.903090 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 23:00:14.903130 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 23:00:14.905395 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 23:00:14.916574 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 23:00:14.917373 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:00:14.920729 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Nov 23 23:00:14.924345 kernel: ACPI: bus type drm_connector registered Nov 23 23:00:14.924694 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 23:00:14.925407 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:00:14.928646 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 23:00:14.929394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:00:14.930556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 23:00:14.938124 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 23:00:14.948204 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 23:00:14.951435 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:00:14.951955 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:00:14.953270 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 23:00:14.956886 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 23:00:14.962978 systemd-journald[1159]: Time spent on flushing to /var/log/journal/89f14cf1cab54555a7e4650d176f54cd is 62.308ms for 1170 entries. Nov 23 23:00:14.962978 systemd-journald[1159]: System Journal (/var/log/journal/89f14cf1cab54555a7e4650d176f54cd) is 8M, max 584.8M, 576.8M free. Nov 23 23:00:15.046461 systemd-journald[1159]: Received client request to flush runtime journal. Nov 23 23:00:15.046539 kernel: loop0: detected capacity change from 0 to 100632 Nov 23 23:00:15.046563 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 23:00:14.963226 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 23:00:14.992900 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 23:00:14.995689 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 23:00:15.003706 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 23:00:15.016842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 23:00:15.051998 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 23:00:15.069641 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 23:00:15.074352 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 23:00:15.080372 kernel: loop1: detected capacity change from 0 to 119840 Nov 23 23:00:15.081127 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 23:00:15.104244 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 23:00:15.117405 kernel: loop2: detected capacity change from 0 to 8 Nov 23 23:00:15.134369 kernel: loop3: detected capacity change from 0 to 207008 Nov 23 23:00:15.137090 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Nov 23 23:00:15.137704 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Nov 23 23:00:15.146821 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
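
systemd-journald above reports the runtime journal under /run/log/journal/<machine-id> (8M used, 76.5M max) and, once persistent storage is available, flushes it to the system journal on /var (62.308ms for 1170 entries, 584.8M max). Those usage figures are simply the accumulated sizes of the journal files; a small sketch that sums them per journal directory, assuming read access to the paths shown in the log:

# Illustrative only: total up journal file sizes the way the "is 8M, max ..."
# numbers above summarize them; journalctl --disk-usage reports the same idea.
from pathlib import Path

def journal_usage_bytes(root: str) -> int:
    return sum(p.stat().st_size for p in Path(root).rglob("*.journal"))

for root in ("/run/log/journal", "/var/log/journal"):
    if Path(root).is_dir():
        print(root, round(journal_usage_bytes(root) / 1024 / 1024, 1), "MiB")
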
Nov 23 23:00:15.188347 kernel: loop4: detected capacity change from 0 to 100632 Nov 23 23:00:15.209345 kernel: loop5: detected capacity change from 0 to 119840 Nov 23 23:00:15.241338 kernel: loop6: detected capacity change from 0 to 8 Nov 23 23:00:15.244347 kernel: loop7: detected capacity change from 0 to 207008 Nov 23 23:00:15.264619 (sd-merge)[1234]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 23 23:00:15.265164 (sd-merge)[1234]: Merged extensions into '/usr'. Nov 23 23:00:15.272622 systemd[1]: Reload requested from client PID 1208 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 23:00:15.272785 systemd[1]: Reloading... Nov 23 23:00:15.375317 zram_generator::config[1260]: No configuration found. Nov 23 23:00:15.570953 ldconfig[1203]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 23:00:15.642862 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 23:00:15.643311 systemd[1]: Reloading finished in 370 ms. Nov 23 23:00:15.686446 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 23:00:15.687620 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 23:00:15.688877 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 23:00:15.702765 systemd[1]: Starting ensure-sysext.service... Nov 23 23:00:15.708497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 23:00:15.714485 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 23:00:15.724986 systemd[1]: Reload requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)... Nov 23 23:00:15.725010 systemd[1]: Reloading... Nov 23 23:00:15.748412 systemd-udevd[1300]: Using default interface naming scheme 'v255'. Nov 23 23:00:15.756181 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 23:00:15.756223 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 23:00:15.756511 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 23:00:15.756718 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 23:00:15.761658 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 23:00:15.762037 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Nov 23 23:00:15.762083 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Nov 23 23:00:15.768667 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:00:15.769565 systemd-tmpfiles[1299]: Skipping /boot Nov 23 23:00:15.781564 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 23:00:15.781711 systemd-tmpfiles[1299]: Skipping /boot Nov 23 23:00:15.837398 zram_generator::config[1329]: No configuration found. Nov 23 23:00:16.076822 systemd[1]: Reloading finished in 349 ms. Nov 23 23:00:16.087276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 23:00:16.096248 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
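
The (sd-merge) lines above are systemd-sysext overlaying the extension images containerd-flatcar, docker-flatcar, kubernetes, and oem-hetzner onto /usr, after which systemd reloads so the newly visible units are picked up. The sketch below only enumerates candidate extension images from the standard sysext search directories; the real merge (overlayfs setup, extension-release validation) is considerably more involved:

# Illustrative only: list sysext images/directories the way systemd-sysext
# discovers candidates before merging them over /usr. Search paths follow the
# systemd-sysext documentation; validation and the overlay itself are omitted.
import os

SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

def candidate_extensions() -> list:
    found = []
    for directory in SYSEXT_DIRS:
        if not os.path.isdir(directory):
            continue
        for entry in sorted(os.listdir(directory)):
            path = os.path.join(directory, entry)
            if entry.endswith(".raw") or os.path.isdir(path):
                found.append(path)
    return found

# On this host the kubernetes entry comes from the /etc/extensions/kubernetes.raw
# link that the Ignition files stage created earlier in this log.
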
Nov 23 23:00:16.104797 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 23 23:00:16.107567 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:00:16.113793 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 23:00:16.117098 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 23:00:16.121340 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 23:00:16.124606 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 23:00:16.133567 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 23 23:00:16.142004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:00:16.144359 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:00:16.148505 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:00:16.150364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:00:16.151489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:00:16.151643 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:00:16.155621 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 23:00:16.159232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:00:16.159430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:00:16.159522 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:00:16.162932 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:00:16.167587 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 23:00:16.168564 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:00:16.168700 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:00:16.189624 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 23:00:16.201328 kernel: mousedev: PS/2 mouse device common for all mice Nov 23 23:00:16.206325 systemd[1]: Finished ensure-sysext.service. Nov 23 23:00:16.211633 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 23 23:00:16.228695 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 23:00:16.234025 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Nov 23 23:00:16.248396 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 23:00:16.250018 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:00:16.250649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:00:16.253303 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:00:16.258877 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 23:00:16.262403 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 23:00:16.279681 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:00:16.279905 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:00:16.281861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:00:16.283326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:00:16.285640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:00:16.285729 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 23:00:16.293153 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 23 23:00:16.296322 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 23:00:16.298742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 23:00:16.302351 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 23:00:16.309561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 23:00:16.313624 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 23:00:16.315490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 23:00:16.315530 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 23:00:16.315555 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 23:00:16.328353 augenrules[1454]: No rules Nov 23 23:00:16.330536 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:00:16.330762 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:00:16.342997 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 23:00:16.344825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 23:00:16.350742 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 23:00:16.350991 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 23:00:16.352088 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
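Many of the "was skipped because of an unmet condition check" entries here come from ordinary `Condition*=` directives: `ConditionDirectoryNotEmpty=/sys/fs/pstore` on systemd-pstore.service, `ConditionPathExists=` on systemd-hibernate-clear.service, and the negated `ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt` on update-ca-certificates.service. A rough illustration of those semantics follows; it is not systemd's implementation, just the documented behaviour that a false condition skips (rather than fails) the unit and a leading `!` negates the check.

```python
# Rough illustration of the Condition*= checks named in the entries above.
# Not systemd's code: a false condition skips the unit instead of failing it,
# and a leading "!" negates the check.
import os

def condition_path_exists(arg: str) -> bool:
    negate = arg.startswith("!")
    return os.path.exists(arg.lstrip("!")) != negate

def condition_path_is_symbolic_link(arg: str) -> bool:
    negate = arg.startswith("!")
    return os.path.islink(arg.lstrip("!")) != negate

def condition_directory_not_empty(path: str) -> bool:
    return os.path.isdir(path) and bool(os.listdir(path))

# Checks taken verbatim from the log entries above:
print(condition_directory_not_empty("/sys/fs/pstore"))                         # systemd-pstore.service
print(condition_path_is_symbolic_link("!/etc/ssl/certs/ca-certificates.crt"))  # update-ca-certificates.service
print(condition_path_exists("/sys/firmware/efi/efivars/"
                            "HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67"))  # systemd-hibernate-clear.service
```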
Nov 23 23:00:16.353815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 23:00:16.354028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 23:00:16.357535 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 23:00:16.393751 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 23:00:16.404398 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Nov 23 23:00:16.411323 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 23 23:00:16.411398 kernel: [drm] features: -context_init Nov 23 23:00:16.417625 kernel: [drm] number of scanouts: 1 Nov 23 23:00:16.417713 kernel: [drm] number of cap sets: 0 Nov 23 23:00:16.443327 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Nov 23 23:00:16.479599 kernel: Console: switching to colour frame buffer device 160x50 Nov 23 23:00:16.493326 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 23 23:00:16.495466 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 23 23:00:16.498869 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 23:00:16.542065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:16.547905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 23:00:16.567625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 23:00:16.568603 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:16.574185 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 23:00:16.666455 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 23 23:00:16.667377 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 23:00:16.671264 systemd-networkd[1408]: lo: Link UP Nov 23 23:00:16.672253 systemd-networkd[1408]: lo: Gained carrier Nov 23 23:00:16.678017 systemd-networkd[1408]: Enumeration completed Nov 23 23:00:16.678090 systemd-timesyncd[1426]: No network connectivity, watching for changes. Nov 23 23:00:16.678650 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 23:00:16.680786 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:00:16.680795 systemd-networkd[1408]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:00:16.681504 systemd-networkd[1408]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:00:16.681508 systemd-networkd[1408]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 23:00:16.682115 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 23:00:16.682663 systemd-networkd[1408]: eth0: Link UP Nov 23 23:00:16.684873 systemd-networkd[1408]: eth0: Gained carrier Nov 23 23:00:16.684904 systemd-networkd[1408]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 23 23:00:16.686149 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 23:00:16.694618 systemd-networkd[1408]: eth1: Link UP Nov 23 23:00:16.697854 systemd-networkd[1408]: eth1: Gained carrier Nov 23 23:00:16.697886 systemd-networkd[1408]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 23:00:16.707745 systemd-resolved[1409]: Positive Trust Anchors: Nov 23 23:00:16.708102 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 23:00:16.708179 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 23:00:16.722195 systemd-resolved[1409]: Using system hostname 'ci-4459-2-1-d-6a40a07c08'. Nov 23 23:00:16.724402 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 23:00:16.730607 systemd-networkd[1408]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 23 23:00:16.730618 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 23:00:16.731579 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Nov 23 23:00:16.731886 systemd[1]: Reached target network.target - Network. Nov 23 23:00:16.733375 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 23:00:16.734055 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 23:00:16.734853 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 23:00:16.735650 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 23:00:16.736524 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 23:00:16.737387 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 23:00:16.738368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 23:00:16.739411 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 23:00:16.739453 systemd[1]: Reached target paths.target - Path Units. Nov 23 23:00:16.740097 systemd[1]: Reached target timers.target - Timer Units. Nov 23 23:00:16.742471 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 23:00:16.745646 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 23:00:16.749524 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 23:00:16.751451 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 23:00:16.752235 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
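The "Positive Trust Anchors" entry is systemd-resolved's built-in DNSSEC trust anchor for the root zone (the 2017 root KSK), and the long list of negative anchors names private and special-use zones that resolved will not attempt to validate. Reading the DS record field by field, with the record text copied verbatim from the log line:

```python
# Field-by-field reading of the DS record in the "Positive Trust Anchors"
# entry above; the record text is copied verbatim from the log.
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
owner, _class, _rtype, key_tag, algorithm, digest_type, digest = ds.split()
print({
    "owner": owner,                    # "." = the root zone
    "key_tag": int(key_tag),           # 20326: the 2017 root key-signing key
    "algorithm": int(algorithm),       # 8 = RSA/SHA-256
    "digest_type": int(digest_type),   # 2 = SHA-256
    "digest_bits": len(digest) * 4,    # 256-bit digest of the KSK's DNSKEY
})
```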
Nov 23 23:00:16.755547 systemd-networkd[1408]: eth0: DHCPv4 address 49.12.4.178/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 23 23:00:16.756117 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Nov 23 23:00:16.758669 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Nov 23 23:00:16.760114 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 23:00:16.763123 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 23:00:16.765151 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 23:00:16.766243 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 23:00:16.767728 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 23:00:16.768541 systemd[1]: Reached target basic.target - Basic System. Nov 23 23:00:16.769177 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:00:16.769224 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 23:00:16.770609 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 23:00:16.773483 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 23:00:16.776575 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 23:00:16.783553 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 23:00:16.787401 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 23:00:16.791226 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 23:00:16.791964 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 23:00:16.794534 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 23:00:16.802001 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 23:00:16.809001 jq[1510]: false Nov 23 23:00:16.812347 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 23 23:00:16.817844 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 23:00:16.822862 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 23:00:16.831138 coreos-metadata[1507]: Nov 23 23:00:16.831 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 23 23:00:16.831739 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 23:00:16.835048 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 23:00:16.835629 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 23:00:16.837375 coreos-metadata[1507]: Nov 23 23:00:16.837 INFO Fetch successful Nov 23 23:00:16.838680 systemd[1]: Starting update-engine.service - Update Engine... 
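The eth0 lease above follows the usual Hetzner cloud pattern: a /32 host address (49.12.4.178/32) with a gateway (172.31.1.1) that is not inside that prefix, so default-route traffic has to treat the gateway as directly reachable via an on-link host route rather than ordinary subnet routing. A quick check with the values copied from the lease:

```python
# The eth0 lease above assigns a /32 with a gateway outside that prefix, so
# the gateway must be treated as directly reachable (an on-link host route)
# rather than as part of the local subnet.
import ipaddress

addr = ipaddress.ip_interface("49.12.4.178/32")   # from the DHCPv4 lease above
gateway = ipaddress.ip_address("172.31.1.1")      # from the same lease
print(gateway in addr.network)  # False: the gateway is not on the local prefix
```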
Nov 23 23:00:16.838880 coreos-metadata[1507]: Nov 23 23:00:16.838 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 23 23:00:16.843482 coreos-metadata[1507]: Nov 23 23:00:16.842 INFO Fetch successful Nov 23 23:00:16.844468 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 23:00:16.853932 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 23:00:16.855130 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 23:00:16.855379 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 23:00:16.865259 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 23:00:16.865979 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 23:00:16.873577 extend-filesystems[1513]: Found /dev/sda6 Nov 23 23:00:16.880261 tar[1529]: linux-arm64/LICENSE Nov 23 23:00:16.880261 tar[1529]: linux-arm64/helm Nov 23 23:00:16.893176 extend-filesystems[1513]: Found /dev/sda9 Nov 23 23:00:16.904156 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 23:00:16.905235 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 23:00:16.909069 extend-filesystems[1513]: Checking size of /dev/sda9 Nov 23 23:00:16.910454 jq[1523]: true Nov 23 23:00:16.937384 dbus-daemon[1508]: [system] SELinux support is enabled Nov 23 23:00:16.937586 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 23:00:16.942175 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 23:00:16.942219 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 23:00:16.943582 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 23:00:16.943603 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 23:00:16.950326 update_engine[1521]: I20251123 23:00:16.948895 1521 main.cc:92] Flatcar Update Engine starting Nov 23 23:00:16.966869 jq[1551]: true Nov 23 23:00:16.967667 systemd[1]: Started update-engine.service - Update Engine. Nov 23 23:00:16.970070 update_engine[1521]: I20251123 23:00:16.969877 1521 update_check_scheduler.cc:74] Next update check in 6m4s Nov 23 23:00:16.972533 extend-filesystems[1513]: Resized partition /dev/sda9 Nov 23 23:00:16.971158 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 23:00:16.983722 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 23:00:16.986750 extend-filesystems[1562]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 23:00:17.001330 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 23 23:00:17.090650 systemd-logind[1519]: New seat seat0. Nov 23 23:00:17.095486 systemd-logind[1519]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 23:00:17.096628 systemd-logind[1519]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Nov 23 23:00:17.096875 systemd[1]: Started systemd-logind.service - User Login Management. 
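coreos-metadata (the Flatcar metadata agent) is fetching instance data from Hetzner's link-local metadata service; both the base document and the private-networks document succeed on the first attempt. Below is a minimal sketch of the same two fetches, using only the URLs that appear in the log; it naturally works only from inside a Hetzner instance, where the link-local address is reachable.

```python
# Minimal sketch of the two fetches coreos-metadata reports above, using the
# exact URLs from the log. Only works from inside a Hetzner VM.
from urllib.request import urlopen

BASE = "http://169.254.169.254/hetzner/v1/metadata"

for url in (BASE, BASE + "/private-networks"):
    with urlopen(url, timeout=5) as resp:
        print(f"--- {url} ({resp.status})")
        print(resp.read().decode())
```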
Nov 23 23:00:17.127755 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 23:00:17.129621 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 23:00:17.145530 bash[1582]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:00:17.153954 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 23:00:17.158801 systemd[1]: Starting sshkeys.service... Nov 23 23:00:17.209314 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 23 23:00:17.229535 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 23 23:00:17.233673 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 23:00:17.256968 extend-filesystems[1562]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 23 23:00:17.256968 extend-filesystems[1562]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 23 23:00:17.256968 extend-filesystems[1562]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 23 23:00:17.262277 extend-filesystems[1513]: Resized filesystem in /dev/sda9 Nov 23 23:00:17.261871 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 23:00:17.262188 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 23:00:17.280809 coreos-metadata[1595]: Nov 23 23:00:17.280 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 23 23:00:17.282651 coreos-metadata[1595]: Nov 23 23:00:17.282 INFO Fetch successful Nov 23 23:00:17.287542 unknown[1595]: wrote ssh authorized keys file for user: core Nov 23 23:00:17.320988 update-ssh-keys[1603]: Updated "/home/core/.ssh/authorized_keys" Nov 23 23:00:17.322371 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 23:00:17.329784 systemd[1]: Finished sshkeys.service. 
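extend-filesystems grows the root filesystem online to fill the disk; converting the block counts reported by the kernel and resize2fs (4 KiB ext4 blocks) shows /dev/sda9 going from roughly 6.6 GB to 38.5 GB:

```python
# Convert the ext4 block counts reported above (4 KiB blocks) into sizes.
BLOCK_SIZE = 4096
for label, blocks in (("before", 1_617_920), ("after", 9_393_147)):
    size = blocks * BLOCK_SIZE
    print(f"{label}: {blocks} blocks = {size / 1e9:.1f} GB ({size / 2**30:.1f} GiB)")
# before: 1617920 blocks = 6.6 GB (6.2 GiB)
# after: 9393147 blocks = 38.5 GB (35.8 GiB)
```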
Nov 23 23:00:17.347862 locksmithd[1563]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 23:00:17.372166 containerd[1555]: time="2025-11-23T23:00:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 23:00:17.374305 containerd[1555]: time="2025-11-23T23:00:17.372935680Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 23:00:17.385227 containerd[1555]: time="2025-11-23T23:00:17.385173640Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.72µs" Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388321720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388365480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388531320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388549200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388577800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388633200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388645640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388877040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388891200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388901880Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.388955080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389326 containerd[1555]: time="2025-11-23T23:00:17.389045440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389605 containerd[1555]: time="2025-11-23T23:00:17.389232080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389605 containerd[1555]: time="2025-11-23T23:00:17.389260840Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 23:00:17.389605 containerd[1555]: time="2025-11-23T23:00:17.389278600Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 23:00:17.393406 containerd[1555]: time="2025-11-23T23:00:17.393364440Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 23:00:17.393736 containerd[1555]: time="2025-11-23T23:00:17.393714840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 23:00:17.393839 containerd[1555]: time="2025-11-23T23:00:17.393817440Z" level=info msg="metadata content store policy set" policy=shared Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412273440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412423200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412456840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412494360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412521920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412546200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412588480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412615680Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412656360Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412682120Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412706720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.412741280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.413030200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 23:00:17.413354 containerd[1555]: time="2025-11-23T23:00:17.413075240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 23:00:17.414184 containerd[1555]: time="2025-11-23T23:00:17.413105200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 
23:00:17.414184 containerd[1555]: time="2025-11-23T23:00:17.413133160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 23:00:17.414184 containerd[1555]: time="2025-11-23T23:00:17.413159640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 23:00:17.414184 containerd[1555]: time="2025-11-23T23:00:17.413184080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 23:00:17.414184 containerd[1555]: time="2025-11-23T23:00:17.413209960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 23:00:17.414184 containerd[1555]: time="2025-11-23T23:00:17.413232920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 23:00:17.414184 containerd[1555]: time="2025-11-23T23:00:17.413259120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 23:00:17.416341 containerd[1555]: time="2025-11-23T23:00:17.413282600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 23:00:17.416341 containerd[1555]: time="2025-11-23T23:00:17.415353840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 23:00:17.416341 containerd[1555]: time="2025-11-23T23:00:17.415623200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 23:00:17.416341 containerd[1555]: time="2025-11-23T23:00:17.415660360Z" level=info msg="Start snapshots syncer" Nov 23 23:00:17.416341 containerd[1555]: time="2025-11-23T23:00:17.415706480Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 23:00:17.417508 containerd[1555]: time="2025-11-23T23:00:17.416249680Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 23:00:17.417789 containerd[1555]: time="2025-11-23T23:00:17.417710640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 23:00:17.418163 containerd[1555]: time="2025-11-23T23:00:17.418141520Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 23:00:17.418498 containerd[1555]: time="2025-11-23T23:00:17.418477280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 23:00:17.418631 containerd[1555]: time="2025-11-23T23:00:17.418567960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 23:00:17.418631 containerd[1555]: time="2025-11-23T23:00:17.418583040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 23:00:17.418631 containerd[1555]: time="2025-11-23T23:00:17.418593760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 23:00:17.418631 containerd[1555]: time="2025-11-23T23:00:17.418606640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 23:00:17.418810 containerd[1555]: time="2025-11-23T23:00:17.418736320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 23:00:17.418810 containerd[1555]: time="2025-11-23T23:00:17.418756360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 23:00:17.418810 containerd[1555]: time="2025-11-23T23:00:17.418788320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 23:00:17.418951 containerd[1555]: 
time="2025-11-23T23:00:17.418799080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 23:00:17.418951 containerd[1555]: time="2025-11-23T23:00:17.418893480Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 23:00:17.419172 containerd[1555]: time="2025-11-23T23:00:17.419080760Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:00:17.419367 containerd[1555]: time="2025-11-23T23:00:17.419161200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 23:00:17.419367 containerd[1555]: time="2025-11-23T23:00:17.419228720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:00:17.419367 containerd[1555]: time="2025-11-23T23:00:17.419240400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 23:00:17.419367 containerd[1555]: time="2025-11-23T23:00:17.419248320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 23:00:17.419367 containerd[1555]: time="2025-11-23T23:00:17.419257840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 23:00:17.419367 containerd[1555]: time="2025-11-23T23:00:17.419270480Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 23:00:17.419575 containerd[1555]: time="2025-11-23T23:00:17.419501560Z" level=info msg="runtime interface created" Nov 23 23:00:17.419575 containerd[1555]: time="2025-11-23T23:00:17.419514520Z" level=info msg="created NRI interface" Nov 23 23:00:17.419575 containerd[1555]: time="2025-11-23T23:00:17.419525120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 23:00:17.419575 containerd[1555]: time="2025-11-23T23:00:17.419541560Z" level=info msg="Connect containerd service" Nov 23 23:00:17.419701 containerd[1555]: time="2025-11-23T23:00:17.419663640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 23:00:17.421015 containerd[1555]: time="2025-11-23T23:00:17.420991360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:00:17.580401 containerd[1555]: time="2025-11-23T23:00:17.580206480Z" level=info msg="Start subscribing containerd event" Nov 23 23:00:17.581230 containerd[1555]: time="2025-11-23T23:00:17.580517080Z" level=info msg="Start recovering state" Nov 23 23:00:17.581230 containerd[1555]: time="2025-11-23T23:00:17.580625280Z" level=info msg="Start event monitor" Nov 23 23:00:17.581230 containerd[1555]: time="2025-11-23T23:00:17.580641240Z" level=info msg="Start cni network conf syncer for default" Nov 23 23:00:17.581230 containerd[1555]: time="2025-11-23T23:00:17.580648400Z" level=info msg="Start streaming server" Nov 23 23:00:17.581230 containerd[1555]: time="2025-11-23T23:00:17.580657000Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 23:00:17.581230 containerd[1555]: 
time="2025-11-23T23:00:17.580663240Z" level=info msg="runtime interface starting up..." Nov 23 23:00:17.581230 containerd[1555]: time="2025-11-23T23:00:17.580669120Z" level=info msg="starting plugins..." Nov 23 23:00:17.581230 containerd[1555]: time="2025-11-23T23:00:17.580683240Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 23:00:17.583208 containerd[1555]: time="2025-11-23T23:00:17.583012680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 23:00:17.583208 containerd[1555]: time="2025-11-23T23:00:17.583097520Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 23:00:17.583932 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 23:00:17.585565 containerd[1555]: time="2025-11-23T23:00:17.585170440Z" level=info msg="containerd successfully booted in 0.213400s" Nov 23 23:00:17.650313 tar[1529]: linux-arm64/README.md Nov 23 23:00:17.675521 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 23:00:17.788451 systemd-networkd[1408]: eth1: Gained IPv6LL Nov 23 23:00:17.789462 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Nov 23 23:00:17.795903 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 23:00:17.798382 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 23:00:17.803510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:17.809676 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 23:00:17.861534 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 23:00:18.617475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:00:18.627960 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:18.684487 systemd-networkd[1408]: eth0: Gained IPv6LL Nov 23 23:00:18.685302 systemd-timesyncd[1426]: Network configuration changed, trying to establish connection. Nov 23 23:00:19.020571 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 23:00:19.049309 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 23:00:19.054680 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 23:00:19.078847 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 23:00:19.079802 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 23:00:19.084592 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 23:00:19.110380 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 23:00:19.116464 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 23:00:19.118890 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 23:00:19.119762 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 23:00:19.120887 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 23:00:19.126422 systemd[1]: Startup finished in 2.386s (kernel) + 5.432s (initrd) + 5.133s (userspace) = 12.953s. 
Nov 23 23:00:19.136836 kubelet[1640]: E1123 23:00:19.136784 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:19.143584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:19.143736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:19.144223 systemd[1]: kubelet.service: Consumed 854ms CPU time, 253.9M memory peak. Nov 23 23:00:29.394442 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 23:00:29.396503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:29.580583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:00:29.593411 (kubelet)[1677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:29.649285 kubelet[1677]: E1123 23:00:29.649161 1677 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:29.653940 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:29.654342 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:29.655006 systemd[1]: kubelet.service: Consumed 184ms CPU time, 106M memory peak. Nov 23 23:00:39.819373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 23:00:39.823216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:39.984976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:00:39.997224 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:40.044758 kubelet[1691]: E1123 23:00:40.044691 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:40.047212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:40.047366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:40.047934 systemd[1]: kubelet.service: Consumed 172ms CPU time, 107.2M memory peak. Nov 23 23:00:45.961811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 23:00:45.964404 systemd[1]: Started sshd@0-49.12.4.178:22-139.178.68.195:60264.service - OpenSSH per-connection server daemon (139.178.68.195:60264). Nov 23 23:00:46.949872 sshd[1699]: Accepted publickey for core from 139.178.68.195 port 60264 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:46.954711 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:46.964997 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
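The kubelet failure is expected at this stage: /var/lib/kubelet/config.yaml is normally written only when the node is bootstrapped (for example by kubeadm init or join), so until that happens the service exits immediately and systemd keeps restarting it. The spacing of the "Started kubelet.service" entries is consistent with a roughly 10-second restart interval; that interval is an assumption, since the unit file itself is not part of this log.

```python
# Spacing between the "Started kubelet.service" attempts, using timestamps
# (truncated to the second) copied from the surrounding entries. The ~10 s
# gaps are consistent with a RestartSec=10-style policy, which is an
# assumption here since the unit file is not shown in this log.
from datetime import datetime

starts = ["23:00:18", "23:00:29", "23:00:39", "23:00:50", "23:01:00"]
times = [datetime.strptime(t, "%H:%M:%S") for t in starts]
print([int((b - a).total_seconds()) for a, b in zip(times, times[1:])])
# [11, 10, 11, 10]
```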
Nov 23 23:00:46.966548 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 23:00:46.977110 systemd-logind[1519]: New session 1 of user core. Nov 23 23:00:46.991634 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 23:00:46.995498 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 23:00:47.008635 (systemd)[1704]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 23:00:47.012382 systemd-logind[1519]: New session c1 of user core. Nov 23 23:00:47.153952 systemd[1704]: Queued start job for default target default.target. Nov 23 23:00:47.166768 systemd[1704]: Created slice app.slice - User Application Slice. Nov 23 23:00:47.166815 systemd[1704]: Reached target paths.target - Paths. Nov 23 23:00:47.166869 systemd[1704]: Reached target timers.target - Timers. Nov 23 23:00:47.168774 systemd[1704]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 23:00:47.183099 systemd[1704]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 23:00:47.183411 systemd[1704]: Reached target sockets.target - Sockets. Nov 23 23:00:47.183475 systemd[1704]: Reached target basic.target - Basic System. Nov 23 23:00:47.183506 systemd[1704]: Reached target default.target - Main User Target. Nov 23 23:00:47.183535 systemd[1704]: Startup finished in 163ms. Nov 23 23:00:47.183650 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 23:00:47.194648 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 23:00:47.880543 systemd[1]: Started sshd@1-49.12.4.178:22-139.178.68.195:60276.service - OpenSSH per-connection server daemon (139.178.68.195:60276). Nov 23 23:00:48.873368 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 60276 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:48.875182 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:48.880467 systemd-logind[1519]: New session 2 of user core. Nov 23 23:00:48.888593 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 23:00:49.413522 systemd-timesyncd[1426]: Contacted time server 91.98.67.74:123 (2.flatcar.pool.ntp.org). Nov 23 23:00:49.413722 systemd-timesyncd[1426]: Initial clock synchronization to Sun 2025-11-23 23:00:49.413322 UTC. Nov 23 23:00:49.414197 systemd-resolved[1409]: Clock change detected. Flushing caches. Nov 23 23:00:50.024416 sshd[1718]: Connection closed by 139.178.68.195 port 60276 Nov 23 23:00:50.025537 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:50.030096 systemd[1]: sshd@1-49.12.4.178:22-139.178.68.195:60276.service: Deactivated successfully. Nov 23 23:00:50.031921 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 23:00:50.033046 systemd-logind[1519]: Session 2 logged out. Waiting for processes to exit. Nov 23 23:00:50.035175 systemd-logind[1519]: Removed session 2. Nov 23 23:00:50.197784 systemd[1]: Started sshd@2-49.12.4.178:22-139.178.68.195:60284.service - OpenSSH per-connection server daemon (139.178.68.195:60284). Nov 23 23:00:50.541381 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 23 23:00:50.543471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:00:50.701199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 23 23:00:50.711994 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:00:50.764581 kubelet[1735]: E1123 23:00:50.764514 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:00:50.767466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:00:50.767792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:00:50.768510 systemd[1]: kubelet.service: Consumed 174ms CPU time, 105.6M memory peak. Nov 23 23:00:51.173417 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 60284 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:51.175398 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:51.182387 systemd-logind[1519]: New session 3 of user core. Nov 23 23:00:51.194953 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 23:00:51.839430 sshd[1742]: Connection closed by 139.178.68.195 port 60284 Nov 23 23:00:51.841254 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:51.847104 systemd[1]: sshd@2-49.12.4.178:22-139.178.68.195:60284.service: Deactivated successfully. Nov 23 23:00:51.847428 systemd-logind[1519]: Session 3 logged out. Waiting for processes to exit. Nov 23 23:00:51.849922 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 23:00:51.855102 systemd-logind[1519]: Removed session 3. Nov 23 23:00:52.023153 systemd[1]: Started sshd@3-49.12.4.178:22-139.178.68.195:57882.service - OpenSSH per-connection server daemon (139.178.68.195:57882). Nov 23 23:00:53.001088 sshd[1748]: Accepted publickey for core from 139.178.68.195 port 57882 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:53.003157 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:53.012437 systemd-logind[1519]: New session 4 of user core. Nov 23 23:00:53.019588 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 23:00:53.671985 sshd[1751]: Connection closed by 139.178.68.195 port 57882 Nov 23 23:00:53.671780 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:53.679628 systemd[1]: sshd@3-49.12.4.178:22-139.178.68.195:57882.service: Deactivated successfully. Nov 23 23:00:53.683053 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 23:00:53.684268 systemd-logind[1519]: Session 4 logged out. Waiting for processes to exit. Nov 23 23:00:53.686110 systemd-logind[1519]: Removed session 4. Nov 23 23:00:53.849402 systemd[1]: Started sshd@4-49.12.4.178:22-139.178.68.195:57892.service - OpenSSH per-connection server daemon (139.178.68.195:57892). Nov 23 23:00:54.821516 sshd[1757]: Accepted publickey for core from 139.178.68.195 port 57892 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:54.823435 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:54.831592 systemd-logind[1519]: New session 5 of user core. Nov 23 23:00:54.839636 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 23 23:00:55.341409 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 23:00:55.342245 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:00:55.360124 sudo[1761]: pam_unix(sudo:session): session closed for user root Nov 23 23:00:55.515765 sshd[1760]: Connection closed by 139.178.68.195 port 57892 Nov 23 23:00:55.517479 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:55.523944 systemd[1]: sshd@4-49.12.4.178:22-139.178.68.195:57892.service: Deactivated successfully. Nov 23 23:00:55.526542 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 23:00:55.527713 systemd-logind[1519]: Session 5 logged out. Waiting for processes to exit. Nov 23 23:00:55.529471 systemd-logind[1519]: Removed session 5. Nov 23 23:00:55.688764 systemd[1]: Started sshd@5-49.12.4.178:22-139.178.68.195:57900.service - OpenSSH per-connection server daemon (139.178.68.195:57900). Nov 23 23:00:56.688625 sshd[1767]: Accepted publickey for core from 139.178.68.195 port 57900 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:56.691028 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:56.696210 systemd-logind[1519]: New session 6 of user core. Nov 23 23:00:56.703811 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 23:00:57.208353 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 23:00:57.208685 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:00:57.214477 sudo[1772]: pam_unix(sudo:session): session closed for user root Nov 23 23:00:57.222108 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 23:00:57.222831 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:00:57.238251 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 23:00:57.284257 augenrules[1794]: No rules Nov 23 23:00:57.286073 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 23:00:57.287401 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 23:00:57.290614 sudo[1771]: pam_unix(sudo:session): session closed for user root Nov 23 23:00:57.448549 sshd[1770]: Connection closed by 139.178.68.195 port 57900 Nov 23 23:00:57.449596 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Nov 23 23:00:57.454275 systemd-logind[1519]: Session 6 logged out. Waiting for processes to exit. Nov 23 23:00:57.456456 systemd[1]: sshd@5-49.12.4.178:22-139.178.68.195:57900.service: Deactivated successfully. Nov 23 23:00:57.458867 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 23:00:57.461460 systemd-logind[1519]: Removed session 6. Nov 23 23:00:57.618960 systemd[1]: Started sshd@6-49.12.4.178:22-139.178.68.195:57906.service - OpenSSH per-connection server daemon (139.178.68.195:57906). Nov 23 23:00:58.605746 sshd[1803]: Accepted publickey for core from 139.178.68.195 port 57906 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:00:58.608397 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:00:58.616155 systemd-logind[1519]: New session 7 of user core. Nov 23 23:00:58.624954 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 23 23:00:59.117920 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 23:00:59.118190 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 23:00:59.467206 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 23:00:59.478981 (dockerd)[1824]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 23:00:59.714915 dockerd[1824]: time="2025-11-23T23:00:59.714652574Z" level=info msg="Starting up" Nov 23 23:00:59.721324 dockerd[1824]: time="2025-11-23T23:00:59.720043294Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 23:00:59.734105 dockerd[1824]: time="2025-11-23T23:00:59.734033934Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 23:00:59.753585 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1067096546-merged.mount: Deactivated successfully. Nov 23 23:00:59.766048 systemd[1]: var-lib-docker-metacopy\x2dcheck2683919422-merged.mount: Deactivated successfully. Nov 23 23:00:59.775515 dockerd[1824]: time="2025-11-23T23:00:59.775469974Z" level=info msg="Loading containers: start." Nov 23 23:00:59.786355 kernel: Initializing XFRM netlink socket Nov 23 23:01:00.059141 systemd-networkd[1408]: docker0: Link UP Nov 23 23:01:00.063740 dockerd[1824]: time="2025-11-23T23:01:00.063084414Z" level=info msg="Loading containers: done." Nov 23 23:01:00.085718 dockerd[1824]: time="2025-11-23T23:01:00.085671534Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 23:01:00.085988 dockerd[1824]: time="2025-11-23T23:01:00.085968294Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 23:01:00.086139 dockerd[1824]: time="2025-11-23T23:01:00.086122574Z" level=info msg="Initializing buildkit" Nov 23 23:01:00.111232 dockerd[1824]: time="2025-11-23T23:01:00.111166814Z" level=info msg="Completed buildkit initialization" Nov 23 23:01:00.120326 dockerd[1824]: time="2025-11-23T23:01:00.119948574Z" level=info msg="Daemon has completed initialization" Nov 23 23:01:00.120326 dockerd[1824]: time="2025-11-23T23:01:00.120050454Z" level=info msg="API listen on /run/docker.sock" Nov 23 23:01:00.122781 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 23:01:00.750016 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1041064056-merged.mount: Deactivated successfully. Nov 23 23:01:00.791890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 23 23:01:00.794258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:01:00.975591 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
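The short-lived var-lib-docker-check-overlayfs-support and metacopy-check mounts are dockerd probing overlayfs capabilities before settling on the overlay2 storage driver, and the "Not using native diff for overlay2" warning only means Docker falls back to its own diff path because the kernel has redirect_dir enabled. If the running kernel exposes its build config (an assumption; this requires CONFIG_IKCONFIG_PROC, otherwise look for a /boot/config-* file), the option can be checked directly:

```python
# Check the kernel option referenced by the overlay2 warning above. This
# assumes the kernel exposes /proc/config.gz (CONFIG_IKCONFIG_PROC); if it
# does not, the same setting may appear in a /boot/config-* file instead.
import gzip

with gzip.open("/proc/config.gz", "rt") as config:
    for line in config:
        if "CONFIG_OVERLAY_FS_REDIRECT_DIR" in line:
            print(line.strip())
```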
Nov 23 23:01:00.991918 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:01:01.047168 kubelet[2044]: E1123 23:01:01.047086 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:01:01.050096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:01:01.050229 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:01:01.050833 systemd[1]: kubelet.service: Consumed 187ms CPU time, 107.1M memory peak. Nov 23 23:01:01.183179 containerd[1555]: time="2025-11-23T23:01:01.183107174Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 23 23:01:01.833807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076596893.mount: Deactivated successfully. Nov 23 23:01:02.647559 update_engine[1521]: I20251123 23:01:02.647444 1521 update_attempter.cc:509] Updating boot flags... Nov 23 23:01:03.223320 containerd[1555]: time="2025-11-23T23:01:03.222647414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:03.225194 containerd[1555]: time="2025-11-23T23:01:03.225155294Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26432057" Nov 23 23:01:03.226868 containerd[1555]: time="2025-11-23T23:01:03.226828654Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:03.232081 containerd[1555]: time="2025-11-23T23:01:03.232012374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:03.233710 containerd[1555]: time="2025-11-23T23:01:03.233508454Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 2.05033336s" Nov 23 23:01:03.233710 containerd[1555]: time="2025-11-23T23:01:03.233556694Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Nov 23 23:01:03.235119 containerd[1555]: time="2025-11-23T23:01:03.235080334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 23 23:01:04.467314 containerd[1555]: time="2025-11-23T23:01:04.467218414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:04.469056 containerd[1555]: time="2025-11-23T23:01:04.468938534Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618975" Nov 23 23:01:04.470324 containerd[1555]: 
time="2025-11-23T23:01:04.470106734Z" level=info msg="ImageCreate event name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:04.474313 containerd[1555]: time="2025-11-23T23:01:04.473274134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:04.475619 containerd[1555]: time="2025-11-23T23:01:04.475562494Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.24043004s" Nov 23 23:01:04.475783 containerd[1555]: time="2025-11-23T23:01:04.475754694Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Nov 23 23:01:04.478015 containerd[1555]: time="2025-11-23T23:01:04.477964894Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 23 23:01:05.501371 containerd[1555]: time="2025-11-23T23:01:05.501276854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:05.503145 containerd[1555]: time="2025-11-23T23:01:05.503092894Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618456" Nov 23 23:01:05.504816 containerd[1555]: time="2025-11-23T23:01:05.504247894Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:05.507546 containerd[1555]: time="2025-11-23T23:01:05.507505854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:05.508752 containerd[1555]: time="2025-11-23T23:01:05.508715814Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 1.03056216s" Nov 23 23:01:05.508888 containerd[1555]: time="2025-11-23T23:01:05.508872374Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Nov 23 23:01:05.509863 containerd[1555]: time="2025-11-23T23:01:05.509819294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 23 23:01:06.460922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1775152791.mount: Deactivated successfully. 
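The kubelet failures above (restart counter 4, exit status 1) all trace back to the missing /var/lib/kubelet/config.yaml; every start will fail the same way until that file is written. A simple way to watch the unit from another session (standard systemd tooling, nothing specific to this image):

systemctl show kubelet -p NRestarts -p ExecMainStatus   # restart counter and last exit code
test -f /var/lib/kubelet/config.yaml && echo present || echo missing
journalctl -u kubelet -f                                # follow the next attempts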
Nov 23 23:01:06.811254 containerd[1555]: time="2025-11-23T23:01:06.809777694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:06.811254 containerd[1555]: time="2025-11-23T23:01:06.811192054Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561825" Nov 23 23:01:06.812400 containerd[1555]: time="2025-11-23T23:01:06.812336854Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:06.817632 containerd[1555]: time="2025-11-23T23:01:06.817571414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:06.818173 containerd[1555]: time="2025-11-23T23:01:06.818129694Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.30827484s" Nov 23 23:01:06.818173 containerd[1555]: time="2025-11-23T23:01:06.818169894Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Nov 23 23:01:06.818673 containerd[1555]: time="2025-11-23T23:01:06.818641774Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 23 23:01:07.411398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2788415166.mount: Deactivated successfully. 
Nov 23 23:01:08.194379 containerd[1555]: time="2025-11-23T23:01:08.194327374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:08.196260 containerd[1555]: time="2025-11-23T23:01:08.196019414Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Nov 23 23:01:08.197793 containerd[1555]: time="2025-11-23T23:01:08.197739574Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:08.201625 containerd[1555]: time="2025-11-23T23:01:08.201538054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:08.202943 containerd[1555]: time="2025-11-23T23:01:08.202813014Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.38355056s" Nov 23 23:01:08.202943 containerd[1555]: time="2025-11-23T23:01:08.202851374Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 23 23:01:08.203721 containerd[1555]: time="2025-11-23T23:01:08.203677614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 23:01:08.748141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4036983876.mount: Deactivated successfully. 
Nov 23 23:01:08.759091 containerd[1555]: time="2025-11-23T23:01:08.758995854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:01:08.761041 containerd[1555]: time="2025-11-23T23:01:08.760572974Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Nov 23 23:01:08.762056 containerd[1555]: time="2025-11-23T23:01:08.761998534Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:01:08.764265 containerd[1555]: time="2025-11-23T23:01:08.764216334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 23:01:08.765359 containerd[1555]: time="2025-11-23T23:01:08.765319294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 561.59728ms" Nov 23 23:01:08.765503 containerd[1555]: time="2025-11-23T23:01:08.765483134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 23:01:08.766224 containerd[1555]: time="2025-11-23T23:01:08.766127574Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 23 23:01:09.314984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2105745527.mount: Deactivated successfully. 
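The pulls above all go straight through containerd rather than the Docker daemon. Assuming crictl is available and pointed at the containerd socket (the endpoint below is the usual default, not taken from this log), the cached control-plane images can be listed or pre-pulled by hand:

export CONTAINER_RUNTIME_ENDPOINT=unix:///run/containerd/containerd.sock
crictl images | grep -E 'kube-|coredns|pause|etcd'
crictl pull registry.k8s.io/kube-apiserver:v1.32.10   # same reference as the pull logged above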
Nov 23 23:01:11.059809 containerd[1555]: time="2025-11-23T23:01:11.059736454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:11.062325 containerd[1555]: time="2025-11-23T23:01:11.062250974Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Nov 23 23:01:11.062707 containerd[1555]: time="2025-11-23T23:01:11.062656534Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:11.066867 containerd[1555]: time="2025-11-23T23:01:11.066657534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:11.068406 containerd[1555]: time="2025-11-23T23:01:11.068367574Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.30220608s" Nov 23 23:01:11.068550 containerd[1555]: time="2025-11-23T23:01:11.068530934Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 23 23:01:11.291970 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 23 23:01:11.296275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:01:11.466951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:01:11.481073 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 23:01:11.534034 kubelet[2271]: E1123 23:01:11.533952 2271 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 23:01:11.537281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 23:01:11.537448 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 23:01:11.538057 systemd[1]: kubelet.service: Consumed 174ms CPU time, 107.2M memory peak. Nov 23 23:01:16.137030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:01:16.137982 systemd[1]: kubelet.service: Consumed 174ms CPU time, 107.2M memory peak. Nov 23 23:01:16.140282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:01:16.173199 systemd[1]: Reload requested from client PID 2299 ('systemctl') (unit session-7.scope)... Nov 23 23:01:16.173338 systemd[1]: Reloading... Nov 23 23:01:16.312347 zram_generator::config[2346]: No configuration found. Nov 23 23:01:16.509441 systemd[1]: Reloading finished in 335 ms. Nov 23 23:01:16.559114 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 23 23:01:16.559260 systemd[1]: kubelet.service: Failed with result 'signal'. 
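The daemon reload and the kubelet stop above coincide with the kubelet's environment being filled in: on the next start below, only KUBELET_EXTRA_ARGS is still reported unset, so KUBELET_KUBEADM_ARGS has been provided in the meantime. On a kubeadm-style layout the pieces usually live in the paths below (conventional locations, not confirmed by this log):

systemctl cat kubelet                                        # unit file plus any drop-ins
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # conventional kubeadm drop-in
cat /var/lib/kubelet/kubeadm-flags.env                       # typical source of $KUBELET_KUBEADM_ARGS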
Nov 23 23:01:16.559869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:01:16.559959 systemd[1]: kubelet.service: Consumed 117ms CPU time, 95M memory peak. Nov 23 23:01:16.562898 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:01:16.713698 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:01:16.728924 (kubelet)[2391]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:01:16.775669 kubelet[2391]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:01:16.775669 kubelet[2391]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:01:16.775669 kubelet[2391]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:01:16.775669 kubelet[2391]: I1123 23:01:16.775378 2391 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:01:17.575544 kubelet[2391]: I1123 23:01:17.575484 2391 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:01:17.575544 kubelet[2391]: I1123 23:01:17.575552 2391 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:01:17.575961 kubelet[2391]: I1123 23:01:17.575941 2391 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:01:17.608361 kubelet[2391]: E1123 23:01:17.608223 2391 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.12.4.178:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.12.4.178:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:01:17.616664 kubelet[2391]: I1123 23:01:17.616600 2391 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:01:17.624150 kubelet[2391]: I1123 23:01:17.624102 2391 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:01:17.628491 kubelet[2391]: I1123 23:01:17.628467 2391 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:01:17.630061 kubelet[2391]: I1123 23:01:17.629960 2391 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:01:17.630501 kubelet[2391]: I1123 23:01:17.630057 2391 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-1-d-6a40a07c08","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:01:17.630670 kubelet[2391]: I1123 23:01:17.630632 2391 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:01:17.630670 kubelet[2391]: I1123 23:01:17.630663 2391 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:01:17.630992 kubelet[2391]: I1123 23:01:17.630952 2391 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:01:17.635172 kubelet[2391]: I1123 23:01:17.634951 2391 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:01:17.635172 kubelet[2391]: I1123 23:01:17.634988 2391 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:01:17.635172 kubelet[2391]: I1123 23:01:17.635018 2391 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:01:17.635172 kubelet[2391]: I1123 23:01:17.635030 2391 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:01:17.640336 kubelet[2391]: I1123 23:01:17.639157 2391 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:01:17.640336 kubelet[2391]: I1123 23:01:17.639989 2391 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:01:17.640336 kubelet[2391]: W1123 23:01:17.640127 2391 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
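Most of the NodeConfig dumped above comes from /var/lib/kubelet/config.yaml rather than from flags. A minimal KubeletConfiguration matching the values visible in the log (systemd cgroup driver, the default hard-eviction thresholds, static pods from /etc/kubernetes/manifests) would look roughly like the sketch below; it is written to /tmp precisely because it is a sketch, not the file actually present on this host:

cat <<'EOF' > /tmp/kubelet-config-sketch.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
EOF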
Nov 23 23:01:17.641480 kubelet[2391]: I1123 23:01:17.641457 2391 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:01:17.641649 kubelet[2391]: I1123 23:01:17.641637 2391 server.go:1287] "Started kubelet" Nov 23 23:01:17.641908 kubelet[2391]: W1123 23:01:17.641862 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.12.4.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-1-d-6a40a07c08&limit=500&resourceVersion=0": dial tcp 49.12.4.178:6443: connect: connection refused Nov 23 23:01:17.642019 kubelet[2391]: E1123 23:01:17.641997 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.12.4.178:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-1-d-6a40a07c08&limit=500&resourceVersion=0\": dial tcp 49.12.4.178:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:01:17.645286 kubelet[2391]: I1123 23:01:17.645253 2391 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:01:17.649048 kubelet[2391]: W1123 23:01:17.648955 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.12.4.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.12.4.178:6443: connect: connection refused Nov 23 23:01:17.649048 kubelet[2391]: E1123 23:01:17.649051 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.12.4.178:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.12.4.178:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:01:17.653660 kubelet[2391]: I1123 23:01:17.653592 2391 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:01:17.654653 kubelet[2391]: I1123 23:01:17.654486 2391 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:01:17.655545 kubelet[2391]: I1123 23:01:17.655489 2391 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:01:17.656818 kubelet[2391]: E1123 23:01:17.656775 2391 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" Nov 23 23:01:17.658857 kubelet[2391]: I1123 23:01:17.658150 2391 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:01:17.659708 kubelet[2391]: I1123 23:01:17.659669 2391 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:01:17.661220 kubelet[2391]: I1123 23:01:17.661184 2391 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:01:17.661567 kubelet[2391]: I1123 23:01:17.661537 2391 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:01:17.663791 kubelet[2391]: I1123 23:01:17.663649 2391 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:01:17.667953 kubelet[2391]: E1123 23:01:17.667647 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.4.178:6443/api/v1/namespaces/default/events\": dial tcp 49.12.4.178:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-1-d-6a40a07c08.187ac5103f544a0e default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-1-d-6a40a07c08,UID:ci-4459-2-1-d-6a40a07c08,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-1-d-6a40a07c08,},FirstTimestamp:2025-11-23 23:01:17.641607694 +0000 UTC m=+0.905757441,LastTimestamp:2025-11-23 23:01:17.641607694 +0000 UTC m=+0.905757441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-1-d-6a40a07c08,}" Nov 23 23:01:17.668329 kubelet[2391]: W1123 23:01:17.668033 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.4.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.4.178:6443: connect: connection refused Nov 23 23:01:17.668329 kubelet[2391]: E1123 23:01:17.668082 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.12.4.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.12.4.178:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:01:17.668329 kubelet[2391]: E1123 23:01:17.668150 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.4.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-1-d-6a40a07c08?timeout=10s\": dial tcp 49.12.4.178:6443: connect: connection refused" interval="200ms" Nov 23 23:01:17.668329 kubelet[2391]: E1123 23:01:17.668228 2391 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:01:17.669818 kubelet[2391]: I1123 23:01:17.669789 2391 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:01:17.669818 kubelet[2391]: I1123 23:01:17.669811 2391 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:01:17.669929 kubelet[2391]: I1123 23:01:17.669909 2391 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:01:17.688878 kubelet[2391]: I1123 23:01:17.688752 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:01:17.692832 kubelet[2391]: I1123 23:01:17.692426 2391 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 23:01:17.692832 kubelet[2391]: I1123 23:01:17.692457 2391 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:01:17.692832 kubelet[2391]: I1123 23:01:17.692478 2391 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
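Every "connection refused" against 49.12.4.178:6443 above simply means nothing is listening on that port yet; the kube-apiserver static pod is only started further down. Two quick probes from the node (plain tooling, nothing image-specific):

ss -ltn 'sport = :6443'                     # empty until kube-apiserver binds the port
curl -ks https://49.12.4.178:6443/healthz   # refused for now, should return "ok" once the apiserver is up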
Nov 23 23:01:17.692832 kubelet[2391]: I1123 23:01:17.692485 2391 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:01:17.692832 kubelet[2391]: E1123 23:01:17.692570 2391 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:01:17.696429 kubelet[2391]: W1123 23:01:17.696378 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.4.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.4.178:6443: connect: connection refused Nov 23 23:01:17.696559 kubelet[2391]: E1123 23:01:17.696439 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.12.4.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.4.178:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:01:17.700569 kubelet[2391]: I1123 23:01:17.700516 2391 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:01:17.700740 kubelet[2391]: I1123 23:01:17.700727 2391 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:01:17.700814 kubelet[2391]: I1123 23:01:17.700806 2391 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:01:17.704048 kubelet[2391]: I1123 23:01:17.704015 2391 policy_none.go:49] "None policy: Start" Nov 23 23:01:17.704201 kubelet[2391]: I1123 23:01:17.704191 2391 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:01:17.704269 kubelet[2391]: I1123 23:01:17.704262 2391 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:01:17.711083 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 23:01:17.728325 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 23:01:17.735717 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 23:01:17.744324 kubelet[2391]: I1123 23:01:17.744071 2391 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:01:17.744596 kubelet[2391]: I1123 23:01:17.744574 2391 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:01:17.744875 kubelet[2391]: I1123 23:01:17.744823 2391 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:01:17.746901 kubelet[2391]: I1123 23:01:17.746183 2391 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:01:17.749652 kubelet[2391]: E1123 23:01:17.749407 2391 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 23:01:17.749652 kubelet[2391]: E1123 23:01:17.749492 2391 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-1-d-6a40a07c08\" not found" Nov 23 23:01:17.806875 systemd[1]: Created slice kubepods-burstable-pod445131c16ed70449727193d47e83fee7.slice - libcontainer container kubepods-burstable-pod445131c16ed70449727193d47e83fee7.slice. 
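The per-pod slices above correspond to static pod manifests the kubelet found under /etc/kubernetes/manifests (the path it logged when adding the static pod source). Both the manifests and the resulting cgroup units can be inspected directly:

ls /etc/kubernetes/manifests/                                     # one YAML per static control-plane pod
systemctl list-units --no-legend 'kubepods-burstable-pod*.slice'  # the slices created above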
Nov 23 23:01:17.827595 kubelet[2391]: E1123 23:01:17.826768 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.834028 systemd[1]: Created slice kubepods-burstable-pod29e0901e1136f9c6b0909f252659338a.slice - libcontainer container kubepods-burstable-pod29e0901e1136f9c6b0909f252659338a.slice. Nov 23 23:01:17.849783 kubelet[2391]: I1123 23:01:17.849662 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.850656 kubelet[2391]: E1123 23:01:17.849683 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.851450 kubelet[2391]: E1123 23:01:17.851414 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.4.178:6443/api/v1/nodes\": dial tcp 49.12.4.178:6443: connect: connection refused" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.853247 systemd[1]: Created slice kubepods-burstable-pode09b7f9ba382a60a1807d0efb75b07bf.slice - libcontainer container kubepods-burstable-pode09b7f9ba382a60a1807d0efb75b07bf.slice. Nov 23 23:01:17.855839 kubelet[2391]: E1123 23:01:17.855799 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863218 kubelet[2391]: I1123 23:01:17.863144 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863218 kubelet[2391]: I1123 23:01:17.863207 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863386 kubelet[2391]: I1123 23:01:17.863244 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29e0901e1136f9c6b0909f252659338a-kubeconfig\") pod \"kube-scheduler-ci-4459-2-1-d-6a40a07c08\" (UID: \"29e0901e1136f9c6b0909f252659338a\") " pod="kube-system/kube-scheduler-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863386 kubelet[2391]: I1123 23:01:17.863268 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e09b7f9ba382a60a1807d0efb75b07bf-ca-certs\") pod \"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" (UID: \"e09b7f9ba382a60a1807d0efb75b07bf\") " pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863386 kubelet[2391]: I1123 23:01:17.863302 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e09b7f9ba382a60a1807d0efb75b07bf-k8s-certs\") pod 
\"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" (UID: \"e09b7f9ba382a60a1807d0efb75b07bf\") " pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863386 kubelet[2391]: I1123 23:01:17.863323 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-ca-certs\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863386 kubelet[2391]: I1123 23:01:17.863347 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863602 kubelet[2391]: I1123 23:01:17.863394 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.863602 kubelet[2391]: I1123 23:01:17.863415 2391 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e09b7f9ba382a60a1807d0efb75b07bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" (UID: \"e09b7f9ba382a60a1807d0efb75b07bf\") " pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:17.869995 kubelet[2391]: E1123 23:01:17.869917 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.4.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-1-d-6a40a07c08?timeout=10s\": dial tcp 49.12.4.178:6443: connect: connection refused" interval="400ms" Nov 23 23:01:18.055566 kubelet[2391]: I1123 23:01:18.055488 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:18.056093 kubelet[2391]: E1123 23:01:18.056023 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.4.178:6443/api/v1/nodes\": dial tcp 49.12.4.178:6443: connect: connection refused" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:18.130680 containerd[1555]: time="2025-11-23T23:01:18.129258574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-1-d-6a40a07c08,Uid:445131c16ed70449727193d47e83fee7,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:18.151919 containerd[1555]: time="2025-11-23T23:01:18.151870654Z" level=info msg="connecting to shim 904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398" address="unix:///run/containerd/s/0234dcd8f7c6f8635f2ae80b9ff06e6daf7dc3e22f1cd4ea25ba0526fb2600b9" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:18.152759 containerd[1555]: time="2025-11-23T23:01:18.152731454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-1-d-6a40a07c08,Uid:29e0901e1136f9c6b0909f252659338a,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:18.157394 containerd[1555]: 
time="2025-11-23T23:01:18.157351974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-1-d-6a40a07c08,Uid:e09b7f9ba382a60a1807d0efb75b07bf,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:18.198569 systemd[1]: Started cri-containerd-904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398.scope - libcontainer container 904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398. Nov 23 23:01:18.205193 containerd[1555]: time="2025-11-23T23:01:18.204087374Z" level=info msg="connecting to shim 24210ce289ab4463390162ecd1d67c6a0c2535b3a1baa993ddfd2f2ec96c44f8" address="unix:///run/containerd/s/e2e8beb10ffd1351a6eddba3698a8bd3634cfb8264571c5835f687ff9bed6688" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:18.211039 containerd[1555]: time="2025-11-23T23:01:18.210567134Z" level=info msg="connecting to shim d7b7d8007bcf031be44fce93594c371fd33840452a9d8eb3295af76ba70ed020" address="unix:///run/containerd/s/39ac371eb019b58da405be2b33712972a519f4431fb5ca0e6dffa1be0b3b3c07" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:18.241601 systemd[1]: Started cri-containerd-24210ce289ab4463390162ecd1d67c6a0c2535b3a1baa993ddfd2f2ec96c44f8.scope - libcontainer container 24210ce289ab4463390162ecd1d67c6a0c2535b3a1baa993ddfd2f2ec96c44f8. Nov 23 23:01:18.263731 systemd[1]: Started cri-containerd-d7b7d8007bcf031be44fce93594c371fd33840452a9d8eb3295af76ba70ed020.scope - libcontainer container d7b7d8007bcf031be44fce93594c371fd33840452a9d8eb3295af76ba70ed020. Nov 23 23:01:18.271986 kubelet[2391]: E1123 23:01:18.271884 2391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.12.4.178:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-1-d-6a40a07c08?timeout=10s\": dial tcp 49.12.4.178:6443: connect: connection refused" interval="800ms" Nov 23 23:01:18.289680 containerd[1555]: time="2025-11-23T23:01:18.289501294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-1-d-6a40a07c08,Uid:445131c16ed70449727193d47e83fee7,Namespace:kube-system,Attempt:0,} returns sandbox id \"904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398\"" Nov 23 23:01:18.294742 containerd[1555]: time="2025-11-23T23:01:18.294705654Z" level=info msg="CreateContainer within sandbox \"904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 23:01:18.321769 containerd[1555]: time="2025-11-23T23:01:18.319606654Z" level=info msg="Container 6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:18.327954 containerd[1555]: time="2025-11-23T23:01:18.327914694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-1-d-6a40a07c08,Uid:e09b7f9ba382a60a1807d0efb75b07bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"24210ce289ab4463390162ecd1d67c6a0c2535b3a1baa993ddfd2f2ec96c44f8\"" Nov 23 23:01:18.335446 containerd[1555]: time="2025-11-23T23:01:18.335409014Z" level=info msg="CreateContainer within sandbox \"904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892\"" Nov 23 23:01:18.336176 containerd[1555]: time="2025-11-23T23:01:18.336147254Z" level=info msg="StartContainer for 
\"6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892\"" Nov 23 23:01:18.336982 containerd[1555]: time="2025-11-23T23:01:18.336941654Z" level=info msg="CreateContainer within sandbox \"24210ce289ab4463390162ecd1d67c6a0c2535b3a1baa993ddfd2f2ec96c44f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 23:01:18.337860 containerd[1555]: time="2025-11-23T23:01:18.337824694Z" level=info msg="connecting to shim 6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892" address="unix:///run/containerd/s/0234dcd8f7c6f8635f2ae80b9ff06e6daf7dc3e22f1cd4ea25ba0526fb2600b9" protocol=ttrpc version=3 Nov 23 23:01:18.357862 containerd[1555]: time="2025-11-23T23:01:18.357023414Z" level=info msg="Container 071b63682c7d6e10fc0ff4941a5c99469f8191fab36dcd170961df109cdc5f53: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:18.362793 systemd[1]: Started cri-containerd-6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892.scope - libcontainer container 6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892. Nov 23 23:01:18.372108 containerd[1555]: time="2025-11-23T23:01:18.371437174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-1-d-6a40a07c08,Uid:29e0901e1136f9c6b0909f252659338a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7b7d8007bcf031be44fce93594c371fd33840452a9d8eb3295af76ba70ed020\"" Nov 23 23:01:18.375834 containerd[1555]: time="2025-11-23T23:01:18.375780334Z" level=info msg="CreateContainer within sandbox \"d7b7d8007bcf031be44fce93594c371fd33840452a9d8eb3295af76ba70ed020\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 23:01:18.391032 containerd[1555]: time="2025-11-23T23:01:18.390004134Z" level=info msg="Container 5dc74e46e6fd5ccbed8242e27c8bffd4971a5db55042bdfaaba538b97ca0fb90: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:18.392557 containerd[1555]: time="2025-11-23T23:01:18.392475214Z" level=info msg="CreateContainer within sandbox \"24210ce289ab4463390162ecd1d67c6a0c2535b3a1baa993ddfd2f2ec96c44f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"071b63682c7d6e10fc0ff4941a5c99469f8191fab36dcd170961df109cdc5f53\"" Nov 23 23:01:18.396477 containerd[1555]: time="2025-11-23T23:01:18.396436854Z" level=info msg="StartContainer for \"071b63682c7d6e10fc0ff4941a5c99469f8191fab36dcd170961df109cdc5f53\"" Nov 23 23:01:18.398841 containerd[1555]: time="2025-11-23T23:01:18.397789734Z" level=info msg="connecting to shim 071b63682c7d6e10fc0ff4941a5c99469f8191fab36dcd170961df109cdc5f53" address="unix:///run/containerd/s/e2e8beb10ffd1351a6eddba3698a8bd3634cfb8264571c5835f687ff9bed6688" protocol=ttrpc version=3 Nov 23 23:01:18.404243 containerd[1555]: time="2025-11-23T23:01:18.403649494Z" level=info msg="CreateContainer within sandbox \"d7b7d8007bcf031be44fce93594c371fd33840452a9d8eb3295af76ba70ed020\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5dc74e46e6fd5ccbed8242e27c8bffd4971a5db55042bdfaaba538b97ca0fb90\"" Nov 23 23:01:18.405469 containerd[1555]: time="2025-11-23T23:01:18.405436414Z" level=info msg="StartContainer for \"5dc74e46e6fd5ccbed8242e27c8bffd4971a5db55042bdfaaba538b97ca0fb90\"" Nov 23 23:01:18.407183 containerd[1555]: time="2025-11-23T23:01:18.407147534Z" level=info msg="connecting to shim 5dc74e46e6fd5ccbed8242e27c8bffd4971a5db55042bdfaaba538b97ca0fb90" address="unix:///run/containerd/s/39ac371eb019b58da405be2b33712972a519f4431fb5ca0e6dffa1be0b3b3c07" protocol=ttrpc version=3 Nov 23 
23:01:18.443758 containerd[1555]: time="2025-11-23T23:01:18.443706774Z" level=info msg="StartContainer for \"6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892\" returns successfully" Nov 23 23:01:18.445568 systemd[1]: Started cri-containerd-071b63682c7d6e10fc0ff4941a5c99469f8191fab36dcd170961df109cdc5f53.scope - libcontainer container 071b63682c7d6e10fc0ff4941a5c99469f8191fab36dcd170961df109cdc5f53. Nov 23 23:01:18.455266 kubelet[2391]: E1123 23:01:18.454957 2391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.12.4.178:6443/api/v1/namespaces/default/events\": dial tcp 49.12.4.178:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-1-d-6a40a07c08.187ac5103f544a0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-1-d-6a40a07c08,UID:ci-4459-2-1-d-6a40a07c08,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-1-d-6a40a07c08,},FirstTimestamp:2025-11-23 23:01:17.641607694 +0000 UTC m=+0.905757441,LastTimestamp:2025-11-23 23:01:17.641607694 +0000 UTC m=+0.905757441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-1-d-6a40a07c08,}" Nov 23 23:01:18.455581 systemd[1]: Started cri-containerd-5dc74e46e6fd5ccbed8242e27c8bffd4971a5db55042bdfaaba538b97ca0fb90.scope - libcontainer container 5dc74e46e6fd5ccbed8242e27c8bffd4971a5db55042bdfaaba538b97ca0fb90. Nov 23 23:01:18.464422 kubelet[2391]: I1123 23:01:18.464362 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:18.466652 kubelet[2391]: E1123 23:01:18.466587 2391 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.12.4.178:6443/api/v1/nodes\": dial tcp 49.12.4.178:6443: connect: connection refused" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:18.522439 containerd[1555]: time="2025-11-23T23:01:18.522389574Z" level=info msg="StartContainer for \"071b63682c7d6e10fc0ff4941a5c99469f8191fab36dcd170961df109cdc5f53\" returns successfully" Nov 23 23:01:18.530939 kubelet[2391]: W1123 23:01:18.529947 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.12.4.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.12.4.178:6443: connect: connection refused Nov 23 23:01:18.530939 kubelet[2391]: E1123 23:01:18.530026 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.12.4.178:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.12.4.178:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:01:18.530939 kubelet[2391]: W1123 23:01:18.530851 2391 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.12.4.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.12.4.178:6443: connect: connection refused Nov 23 23:01:18.530939 kubelet[2391]: E1123 23:01:18.530913 2391 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.12.4.178:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
49.12.4.178:6443: connect: connection refused" logger="UnhandledError" Nov 23 23:01:18.532895 containerd[1555]: time="2025-11-23T23:01:18.532428254Z" level=info msg="StartContainer for \"5dc74e46e6fd5ccbed8242e27c8bffd4971a5db55042bdfaaba538b97ca0fb90\" returns successfully" Nov 23 23:01:18.706607 kubelet[2391]: E1123 23:01:18.706395 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:18.710357 kubelet[2391]: E1123 23:01:18.709588 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:18.713464 kubelet[2391]: E1123 23:01:18.713440 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:19.269725 kubelet[2391]: I1123 23:01:19.269666 2391 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:19.715353 kubelet[2391]: E1123 23:01:19.715145 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:19.715353 kubelet[2391]: E1123 23:01:19.715203 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.716088 kubelet[2391]: E1123 23:01:20.715992 2391 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-1-d-6a40a07c08\" not found" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.775154 kubelet[2391]: I1123 23:01:20.775109 2391 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.862332 kubelet[2391]: I1123 23:01:20.862027 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.871537 kubelet[2391]: E1123 23:01:20.871473 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.871537 kubelet[2391]: I1123 23:01:20.871528 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.879262 kubelet[2391]: E1123 23:01:20.877110 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-1-d-6a40a07c08\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.879495 kubelet[2391]: I1123 23:01:20.879323 2391 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:20.883218 kubelet[2391]: E1123 23:01:20.883178 2391 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:21.650307 kubelet[2391]: 
I1123 23:01:21.649879 2391 apiserver.go:52] "Watching apiserver" Nov 23 23:01:21.662239 kubelet[2391]: I1123 23:01:21.661895 2391 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:01:23.181987 systemd[1]: Reload requested from client PID 2664 ('systemctl') (unit session-7.scope)... Nov 23 23:01:23.182013 systemd[1]: Reloading... Nov 23 23:01:23.291399 zram_generator::config[2708]: No configuration found. Nov 23 23:01:23.523218 systemd[1]: Reloading finished in 340 ms. Nov 23 23:01:23.549319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:01:23.560830 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 23:01:23.562361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:01:23.562445 systemd[1]: kubelet.service: Consumed 1.381s CPU time, 127.9M memory peak. Nov 23 23:01:23.566795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 23:01:23.723406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 23:01:23.733906 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 23:01:23.796129 kubelet[2752]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:01:23.796538 kubelet[2752]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 23:01:23.796587 kubelet[2752]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 23:01:23.796844 kubelet[2752]: I1123 23:01:23.796796 2752 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 23:01:23.805350 kubelet[2752]: I1123 23:01:23.805280 2752 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 23:01:23.805519 kubelet[2752]: I1123 23:01:23.805506 2752 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 23:01:23.805936 kubelet[2752]: I1123 23:01:23.805916 2752 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 23:01:23.807892 kubelet[2752]: I1123 23:01:23.807857 2752 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 23 23:01:23.811430 kubelet[2752]: I1123 23:01:23.811397 2752 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 23:01:23.816363 kubelet[2752]: I1123 23:01:23.815886 2752 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 23:01:23.818885 kubelet[2752]: I1123 23:01:23.818828 2752 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 23:01:23.820599 kubelet[2752]: I1123 23:01:23.820541 2752 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 23:01:23.820859 kubelet[2752]: I1123 23:01:23.820578 2752 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-1-d-6a40a07c08","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 23:01:23.820859 kubelet[2752]: I1123 23:01:23.820854 2752 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 23:01:23.820859 kubelet[2752]: I1123 23:01:23.820864 2752 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 23:01:23.821087 kubelet[2752]: I1123 23:01:23.820917 2752 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:01:23.821087 kubelet[2752]: I1123 23:01:23.821053 2752 kubelet.go:446] "Attempting to sync node with API server" Nov 23 23:01:23.821087 kubelet[2752]: I1123 23:01:23.821065 2752 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 23:01:23.821087 kubelet[2752]: I1123 23:01:23.821089 2752 kubelet.go:352] "Adding apiserver pod source" Nov 23 23:01:23.821265 kubelet[2752]: I1123 23:01:23.821109 2752 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 23:01:23.836950 kubelet[2752]: I1123 23:01:23.835415 2752 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 23:01:23.836950 kubelet[2752]: I1123 23:01:23.835960 2752 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 23:01:23.837428 kubelet[2752]: I1123 23:01:23.837412 2752 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 23:01:23.837605 kubelet[2752]: I1123 23:01:23.837525 2752 server.go:1287] "Started kubelet" Nov 23 23:01:23.841645 kubelet[2752]: I1123 23:01:23.841625 2752 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 23:01:23.851104 kubelet[2752]: I1123 23:01:23.851060 2752 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Nov 23 23:01:23.852188 kubelet[2752]: I1123 23:01:23.852159 2752 server.go:479] "Adding debug handlers to kubelet server" Nov 23 23:01:23.854477 kubelet[2752]: I1123 23:01:23.852835 2752 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 23:01:23.854704 kubelet[2752]: I1123 23:01:23.854635 2752 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 23:01:23.854744 kubelet[2752]: I1123 23:01:23.853101 2752 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 23:01:23.854858 kubelet[2752]: I1123 23:01:23.854839 2752 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 23:01:23.857016 kubelet[2752]: I1123 23:01:23.856956 2752 factory.go:221] Registration of the systemd container factory successfully Nov 23 23:01:23.857317 kubelet[2752]: I1123 23:01:23.857273 2752 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 23:01:23.859997 kubelet[2752]: I1123 23:01:23.859970 2752 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 23:01:23.860467 kubelet[2752]: I1123 23:01:23.860362 2752 reconciler.go:26] "Reconciler: start to sync state" Nov 23 23:01:23.860467 kubelet[2752]: E1123 23:01:23.860359 2752 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 23:01:23.861485 kubelet[2752]: I1123 23:01:23.861452 2752 factory.go:221] Registration of the containerd container factory successfully Nov 23 23:01:23.864332 kubelet[2752]: I1123 23:01:23.864262 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 23:01:23.867983 kubelet[2752]: I1123 23:01:23.867088 2752 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 23:01:23.867983 kubelet[2752]: I1123 23:01:23.867122 2752 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 23:01:23.867983 kubelet[2752]: I1123 23:01:23.867141 2752 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 23:01:23.867983 kubelet[2752]: I1123 23:01:23.867148 2752 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 23:01:23.867983 kubelet[2752]: E1123 23:01:23.867187 2752 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 23:01:23.915725 kubelet[2752]: I1123 23:01:23.915649 2752 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 23:01:23.915725 kubelet[2752]: I1123 23:01:23.915717 2752 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 23:01:23.915874 kubelet[2752]: I1123 23:01:23.915742 2752 state_mem.go:36] "Initialized new in-memory state store" Nov 23 23:01:23.915941 kubelet[2752]: I1123 23:01:23.915924 2752 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 23:01:23.915969 kubelet[2752]: I1123 23:01:23.915940 2752 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 23:01:23.915969 kubelet[2752]: I1123 23:01:23.915959 2752 policy_none.go:49] "None policy: Start" Nov 23 23:01:23.916018 kubelet[2752]: I1123 23:01:23.915971 2752 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 23:01:23.916018 kubelet[2752]: I1123 23:01:23.915980 2752 state_mem.go:35] "Initializing new in-memory state store" Nov 23 23:01:23.916092 kubelet[2752]: I1123 23:01:23.916081 2752 state_mem.go:75] "Updated machine memory state" Nov 23 23:01:23.920751 kubelet[2752]: I1123 23:01:23.920367 2752 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 23:01:23.921435 kubelet[2752]: I1123 23:01:23.920985 2752 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 23:01:23.921435 kubelet[2752]: I1123 23:01:23.921003 2752 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 23:01:23.921547 kubelet[2752]: I1123 23:01:23.921493 2752 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 23:01:23.923970 kubelet[2752]: E1123 23:01:23.923780 2752 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 23:01:23.967925 kubelet[2752]: I1123 23:01:23.967873 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:23.970583 kubelet[2752]: I1123 23:01:23.970545 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:23.970955 kubelet[2752]: I1123 23:01:23.970930 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.033530 kubelet[2752]: I1123 23:01:24.033491 2752 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.044956 kubelet[2752]: I1123 23:01:24.044920 2752 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.045256 kubelet[2752]: I1123 23:01:24.045165 2752 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061760 kubelet[2752]: I1123 23:01:24.061607 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061760 kubelet[2752]: I1123 23:01:24.061655 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e09b7f9ba382a60a1807d0efb75b07bf-k8s-certs\") pod \"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" (UID: \"e09b7f9ba382a60a1807d0efb75b07bf\") " pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061760 kubelet[2752]: I1123 23:01:24.061713 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-ca-certs\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061760 kubelet[2752]: I1123 23:01:24.061734 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061760 kubelet[2752]: I1123 23:01:24.061754 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: \"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061968 kubelet[2752]: I1123 23:01:24.061772 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/445131c16ed70449727193d47e83fee7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-1-d-6a40a07c08\" (UID: 
\"445131c16ed70449727193d47e83fee7\") " pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061968 kubelet[2752]: I1123 23:01:24.061789 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/29e0901e1136f9c6b0909f252659338a-kubeconfig\") pod \"kube-scheduler-ci-4459-2-1-d-6a40a07c08\" (UID: \"29e0901e1136f9c6b0909f252659338a\") " pod="kube-system/kube-scheduler-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061968 kubelet[2752]: I1123 23:01:24.061804 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e09b7f9ba382a60a1807d0efb75b07bf-ca-certs\") pod \"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" (UID: \"e09b7f9ba382a60a1807d0efb75b07bf\") " pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.061968 kubelet[2752]: I1123 23:01:24.061826 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e09b7f9ba382a60a1807d0efb75b07bf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" (UID: \"e09b7f9ba382a60a1807d0efb75b07bf\") " pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.825204 kubelet[2752]: I1123 23:01:24.825139 2752 apiserver.go:52] "Watching apiserver" Nov 23 23:01:24.860713 kubelet[2752]: I1123 23:01:24.860652 2752 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 23:01:24.898432 kubelet[2752]: I1123 23:01:24.897648 2752 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.909773 kubelet[2752]: E1123 23:01:24.909670 2752 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-1-d-6a40a07c08\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" Nov 23 23:01:24.967514 kubelet[2752]: I1123 23:01:24.966238 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-1-d-6a40a07c08" podStartSLOduration=1.9662168150000001 podStartE2EDuration="1.966216815s" podCreationTimestamp="2025-11-23 23:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:01:24.944034549 +0000 UTC m=+1.202615944" watchObservedRunningTime="2025-11-23 23:01:24.966216815 +0000 UTC m=+1.224798130" Nov 23 23:01:24.980940 kubelet[2752]: I1123 23:01:24.980569 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-1-d-6a40a07c08" podStartSLOduration=1.98049407 podStartE2EDuration="1.98049407s" podCreationTimestamp="2025-11-23 23:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:01:24.966178172 +0000 UTC m=+1.224759527" watchObservedRunningTime="2025-11-23 23:01:24.98049407 +0000 UTC m=+1.239075425" Nov 23 23:01:24.981476 kubelet[2752]: I1123 23:01:24.981391 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" podStartSLOduration=1.9813363800000001 podStartE2EDuration="1.98133638s" podCreationTimestamp="2025-11-23 23:01:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:01:24.979578235 +0000 UTC m=+1.238159630" watchObservedRunningTime="2025-11-23 23:01:24.98133638 +0000 UTC m=+1.239917775" Nov 23 23:01:28.044457 kubelet[2752]: I1123 23:01:28.044408 2752 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 23:01:28.045344 containerd[1555]: time="2025-11-23T23:01:28.045102791Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 23:01:28.046000 kubelet[2752]: I1123 23:01:28.045800 2752 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 23:01:28.748056 systemd[1]: Created slice kubepods-besteffort-podabf2d4f9_bf2e_434b_b477_747aa5f02899.slice - libcontainer container kubepods-besteffort-podabf2d4f9_bf2e_434b_b477_747aa5f02899.slice. Nov 23 23:01:28.793704 kubelet[2752]: I1123 23:01:28.793636 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/abf2d4f9-bf2e-434b-b477-747aa5f02899-lib-modules\") pod \"kube-proxy-wpz2g\" (UID: \"abf2d4f9-bf2e-434b-b477-747aa5f02899\") " pod="kube-system/kube-proxy-wpz2g" Nov 23 23:01:28.793704 kubelet[2752]: I1123 23:01:28.793717 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/abf2d4f9-bf2e-434b-b477-747aa5f02899-kube-proxy\") pod \"kube-proxy-wpz2g\" (UID: \"abf2d4f9-bf2e-434b-b477-747aa5f02899\") " pod="kube-system/kube-proxy-wpz2g" Nov 23 23:01:28.794140 kubelet[2752]: I1123 23:01:28.793754 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/abf2d4f9-bf2e-434b-b477-747aa5f02899-xtables-lock\") pod \"kube-proxy-wpz2g\" (UID: \"abf2d4f9-bf2e-434b-b477-747aa5f02899\") " pod="kube-system/kube-proxy-wpz2g" Nov 23 23:01:28.794140 kubelet[2752]: I1123 23:01:28.793787 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl5km\" (UniqueName: \"kubernetes.io/projected/abf2d4f9-bf2e-434b-b477-747aa5f02899-kube-api-access-cl5km\") pod \"kube-proxy-wpz2g\" (UID: \"abf2d4f9-bf2e-434b-b477-747aa5f02899\") " pod="kube-system/kube-proxy-wpz2g" Nov 23 23:01:29.063477 containerd[1555]: time="2025-11-23T23:01:29.063426682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpz2g,Uid:abf2d4f9-bf2e-434b-b477-747aa5f02899,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:29.098331 containerd[1555]: time="2025-11-23T23:01:29.097830933Z" level=info msg="connecting to shim 8d756326e84bbef50e6b21cb879564c4d126629a8f2400e846a11d09bfab6134" address="unix:///run/containerd/s/1ae62da8e3a4c2e5cbcad047406c0e6f7c4557d48a92fe6c0306e2b94025b670" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:29.134871 systemd[1]: Started cri-containerd-8d756326e84bbef50e6b21cb879564c4d126629a8f2400e846a11d09bfab6134.scope - libcontainer container 8d756326e84bbef50e6b21cb879564c4d126629a8f2400e846a11d09bfab6134. Nov 23 23:01:29.196581 systemd[1]: Created slice kubepods-besteffort-podc74ef937_048a_449d_9130_2820d8d00180.slice - libcontainer container kubepods-besteffort-podc74ef937_048a_449d_9130_2820d8d00180.slice. 
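Editor's note: the "Updating runtime config through cri with podcidr" entry above corresponds to a CRI UpdateRuntimeConfig call pushing the node's pod CIDR (192.168.0.0/24) to the runtime. A hypothetical sketch of that call follows; the containerd CRI socket path used here is the conventional /run/containerd/containerd.sock and is an assumption, since it does not appear in the log.

```go
// Sketch: push the pod CIDR to the container runtime via the CRI RuntimeService.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed CRI endpoint
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rtc := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rtc.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"}, // CIDR from the log
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pod CIDR pushed to the runtime")
}
```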
Nov 23 23:01:29.197488 kubelet[2752]: I1123 23:01:29.197243 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxq5q\" (UniqueName: \"kubernetes.io/projected/c74ef937-048a-449d-9130-2820d8d00180-kube-api-access-bxq5q\") pod \"tigera-operator-7dcd859c48-2wmg2\" (UID: \"c74ef937-048a-449d-9130-2820d8d00180\") " pod="tigera-operator/tigera-operator-7dcd859c48-2wmg2" Nov 23 23:01:29.197488 kubelet[2752]: I1123 23:01:29.197324 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c74ef937-048a-449d-9130-2820d8d00180-var-lib-calico\") pod \"tigera-operator-7dcd859c48-2wmg2\" (UID: \"c74ef937-048a-449d-9130-2820d8d00180\") " pod="tigera-operator/tigera-operator-7dcd859c48-2wmg2" Nov 23 23:01:29.221817 containerd[1555]: time="2025-11-23T23:01:29.221731761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wpz2g,Uid:abf2d4f9-bf2e-434b-b477-747aa5f02899,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d756326e84bbef50e6b21cb879564c4d126629a8f2400e846a11d09bfab6134\"" Nov 23 23:01:29.227179 containerd[1555]: time="2025-11-23T23:01:29.227108362Z" level=info msg="CreateContainer within sandbox \"8d756326e84bbef50e6b21cb879564c4d126629a8f2400e846a11d09bfab6134\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 23:01:29.244621 containerd[1555]: time="2025-11-23T23:01:29.243799917Z" level=info msg="Container 010c81a2d714defd37a226195f50bb41848dcca7b27246cc71702d6711367cce: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:29.248893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987288131.mount: Deactivated successfully. Nov 23 23:01:29.253219 containerd[1555]: time="2025-11-23T23:01:29.253171396Z" level=info msg="CreateContainer within sandbox \"8d756326e84bbef50e6b21cb879564c4d126629a8f2400e846a11d09bfab6134\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"010c81a2d714defd37a226195f50bb41848dcca7b27246cc71702d6711367cce\"" Nov 23 23:01:29.254176 containerd[1555]: time="2025-11-23T23:01:29.254149654Z" level=info msg="StartContainer for \"010c81a2d714defd37a226195f50bb41848dcca7b27246cc71702d6711367cce\"" Nov 23 23:01:29.257134 containerd[1555]: time="2025-11-23T23:01:29.256984023Z" level=info msg="connecting to shim 010c81a2d714defd37a226195f50bb41848dcca7b27246cc71702d6711367cce" address="unix:///run/containerd/s/1ae62da8e3a4c2e5cbcad047406c0e6f7c4557d48a92fe6c0306e2b94025b670" protocol=ttrpc version=3 Nov 23 23:01:29.282648 systemd[1]: Started cri-containerd-010c81a2d714defd37a226195f50bb41848dcca7b27246cc71702d6711367cce.scope - libcontainer container 010c81a2d714defd37a226195f50bb41848dcca7b27246cc71702d6711367cce. 
Nov 23 23:01:29.367174 containerd[1555]: time="2025-11-23T23:01:29.366419348Z" level=info msg="StartContainer for \"010c81a2d714defd37a226195f50bb41848dcca7b27246cc71702d6711367cce\" returns successfully" Nov 23 23:01:29.501253 containerd[1555]: time="2025-11-23T23:01:29.501133581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2wmg2,Uid:c74ef937-048a-449d-9130-2820d8d00180,Namespace:tigera-operator,Attempt:0,}" Nov 23 23:01:29.524217 containerd[1555]: time="2025-11-23T23:01:29.523853856Z" level=info msg="connecting to shim deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186" address="unix:///run/containerd/s/82a637c727396d22a6d98a804ef97588cfef2e6615ca1ff32c3d50b525eb9fa6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:29.564586 systemd[1]: Started cri-containerd-deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186.scope - libcontainer container deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186. Nov 23 23:01:29.607243 containerd[1555]: time="2025-11-23T23:01:29.607176864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-2wmg2,Uid:c74ef937-048a-449d-9130-2820d8d00180,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186\"" Nov 23 23:01:29.610373 containerd[1555]: time="2025-11-23T23:01:29.610327372Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 23:01:29.938639 kubelet[2752]: I1123 23:01:29.938577 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wpz2g" podStartSLOduration=1.938557343 podStartE2EDuration="1.938557343s" podCreationTimestamp="2025-11-23 23:01:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:01:29.936642149 +0000 UTC m=+6.195223544" watchObservedRunningTime="2025-11-23 23:01:29.938557343 +0000 UTC m=+6.197138698" Nov 23 23:01:31.353916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323152836.mount: Deactivated successfully. 
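Editor's note: the PullImage entry for quay.io/tigera/operator:v1.38.7 above goes through the CRI ImageService. A hypothetical sketch of the equivalent call is below; the containerd CRI socket path is again an assumption, and registry auth is omitted because the image is public.

```go
// Sketch: pull an image through the CRI ImageService, mirroring the PullImage entry above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed CRI endpoint
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	imgc := runtimeapi.NewImageServiceClient(conn)
	resp, err := imgc.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The returned reference is the image id/digest that the later "Pulled image" entries report.
	log.Println("pulled:", resp.ImageRef)
}
```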
Nov 23 23:01:31.808062 containerd[1555]: time="2025-11-23T23:01:31.807975420Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:31.811045 containerd[1555]: time="2025-11-23T23:01:31.810534074Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 23:01:31.812283 containerd[1555]: time="2025-11-23T23:01:31.812232843Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:31.815187 containerd[1555]: time="2025-11-23T23:01:31.815130755Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:31.815937 containerd[1555]: time="2025-11-23T23:01:31.815907836Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.205537861s" Nov 23 23:01:31.816038 containerd[1555]: time="2025-11-23T23:01:31.816021842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 23:01:31.820658 containerd[1555]: time="2025-11-23T23:01:31.820610082Z" level=info msg="CreateContainer within sandbox \"deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 23:01:31.830972 containerd[1555]: time="2025-11-23T23:01:31.830897501Z" level=info msg="Container d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:31.836163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1637060359.mount: Deactivated successfully. Nov 23 23:01:31.842631 containerd[1555]: time="2025-11-23T23:01:31.842578673Z" level=info msg="CreateContainer within sandbox \"deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b\"" Nov 23 23:01:31.845348 containerd[1555]: time="2025-11-23T23:01:31.843382916Z" level=info msg="StartContainer for \"d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b\"" Nov 23 23:01:31.846552 containerd[1555]: time="2025-11-23T23:01:31.846503079Z" level=info msg="connecting to shim d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b" address="unix:///run/containerd/s/82a637c727396d22a6d98a804ef97588cfef2e6615ca1ff32c3d50b525eb9fa6" protocol=ttrpc version=3 Nov 23 23:01:31.870745 systemd[1]: Started cri-containerd-d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b.scope - libcontainer container d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b. 
Nov 23 23:01:31.911484 containerd[1555]: time="2025-11-23T23:01:31.911432842Z" level=info msg="StartContainer for \"d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b\" returns successfully" Nov 23 23:01:32.488514 kubelet[2752]: I1123 23:01:32.488106 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-2wmg2" podStartSLOduration=1.279308383 podStartE2EDuration="3.487834972s" podCreationTimestamp="2025-11-23 23:01:29 +0000 UTC" firstStartedPulling="2025-11-23 23:01:29.608891086 +0000 UTC m=+5.867472401" lastFinishedPulling="2025-11-23 23:01:31.817417635 +0000 UTC m=+8.075998990" observedRunningTime="2025-11-23 23:01:31.941499978 +0000 UTC m=+8.200081333" watchObservedRunningTime="2025-11-23 23:01:32.487834972 +0000 UTC m=+8.746416327" Nov 23 23:01:36.298725 sudo[1807]: pam_unix(sudo:session): session closed for user root Nov 23 23:01:36.457263 sshd[1806]: Connection closed by 139.178.68.195 port 57906 Nov 23 23:01:36.457890 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Nov 23 23:01:36.466515 systemd[1]: sshd@6-49.12.4.178:22-139.178.68.195:57906.service: Deactivated successfully. Nov 23 23:01:36.476081 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 23:01:36.477462 systemd[1]: session-7.scope: Consumed 6.937s CPU time, 219.8M memory peak. Nov 23 23:01:36.482531 systemd-logind[1519]: Session 7 logged out. Waiting for processes to exit. Nov 23 23:01:36.486975 systemd-logind[1519]: Removed session 7. Nov 23 23:01:48.546458 systemd[1]: Created slice kubepods-besteffort-pod8373f21b_2da2_4451_9bb1_c63e488456cb.slice - libcontainer container kubepods-besteffort-pod8373f21b_2da2_4451_9bb1_c63e488456cb.slice. Nov 23 23:01:48.623325 kubelet[2752]: I1123 23:01:48.622899 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8373f21b-2da2-4451-9bb1-c63e488456cb-typha-certs\") pod \"calico-typha-fc6687cbf-6pv7h\" (UID: \"8373f21b-2da2-4451-9bb1-c63e488456cb\") " pod="calico-system/calico-typha-fc6687cbf-6pv7h" Nov 23 23:01:48.623844 kubelet[2752]: I1123 23:01:48.623445 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jxw\" (UniqueName: \"kubernetes.io/projected/8373f21b-2da2-4451-9bb1-c63e488456cb-kube-api-access-q6jxw\") pod \"calico-typha-fc6687cbf-6pv7h\" (UID: \"8373f21b-2da2-4451-9bb1-c63e488456cb\") " pod="calico-system/calico-typha-fc6687cbf-6pv7h" Nov 23 23:01:48.624024 kubelet[2752]: I1123 23:01:48.623965 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8373f21b-2da2-4451-9bb1-c63e488456cb-tigera-ca-bundle\") pod \"calico-typha-fc6687cbf-6pv7h\" (UID: \"8373f21b-2da2-4451-9bb1-c63e488456cb\") " pod="calico-system/calico-typha-fc6687cbf-6pv7h" Nov 23 23:01:48.673920 systemd[1]: Created slice kubepods-besteffort-pod4c4f048a_3bf5_45a9_8802_7244b240841c.slice - libcontainer container kubepods-besteffort-pod4c4f048a_3bf5_45a9_8802_7244b240841c.slice. 
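Editor's note: in the tigera-operator startup-latency entry above, podStartSLOduration appears to be podStartE2EDuration minus the image-pull window, computed from the monotonic m=+ offsets (this reading is consistent with the numbers in the log, not stated by it). A quick check:

```go
// Sketch: verify podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling),
// using the m=+ offsets from the tigera-operator entry above.
package main

import "fmt"

func main() {
	e2e := 3.487834972                // podStartE2EDuration, seconds
	pull := 8.075998990 - 5.867472401 // lastFinishedPulling - firstStartedPulling (m=+ offsets)
	fmt.Printf("%.9f\n", e2e-pull)    // prints 1.279308383, matching podStartSLOduration
}
```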
Nov 23 23:01:48.724761 kubelet[2752]: I1123 23:01:48.724689 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-cni-net-dir\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.724924 kubelet[2752]: I1123 23:01:48.724771 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-flexvol-driver-host\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.724924 kubelet[2752]: I1123 23:01:48.724822 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-var-lib-calico\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.724924 kubelet[2752]: I1123 23:01:48.724883 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-lib-modules\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725049 kubelet[2752]: I1123 23:01:48.724921 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-var-run-calico\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725049 kubelet[2752]: I1123 23:01:48.724957 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsbpk\" (UniqueName: \"kubernetes.io/projected/4c4f048a-3bf5-45a9-8802-7244b240841c-kube-api-access-nsbpk\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725049 kubelet[2752]: I1123 23:01:48.724997 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4c4f048a-3bf5-45a9-8802-7244b240841c-node-certs\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725049 kubelet[2752]: I1123 23:01:48.725030 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-policysync\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725233 kubelet[2752]: I1123 23:01:48.725065 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-cni-log-dir\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725233 kubelet[2752]: I1123 23:01:48.725098 2752 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-xtables-lock\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725233 kubelet[2752]: I1123 23:01:48.725159 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4c4f048a-3bf5-45a9-8802-7244b240841c-cni-bin-dir\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.725233 kubelet[2752]: I1123 23:01:48.725195 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c4f048a-3bf5-45a9-8802-7244b240841c-tigera-ca-bundle\") pod \"calico-node-j4dls\" (UID: \"4c4f048a-3bf5-45a9-8802-7244b240841c\") " pod="calico-system/calico-node-j4dls" Nov 23 23:01:48.797318 kubelet[2752]: E1123 23:01:48.797032 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:01:48.826818 kubelet[2752]: I1123 23:01:48.826350 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65c6ee75-f266-4d8e-9f91-7935bbe3f792-kubelet-dir\") pod \"csi-node-driver-qcdmk\" (UID: \"65c6ee75-f266-4d8e-9f91-7935bbe3f792\") " pod="calico-system/csi-node-driver-qcdmk" Nov 23 23:01:48.830371 kubelet[2752]: I1123 23:01:48.828752 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/65c6ee75-f266-4d8e-9f91-7935bbe3f792-varrun\") pod \"csi-node-driver-qcdmk\" (UID: \"65c6ee75-f266-4d8e-9f91-7935bbe3f792\") " pod="calico-system/csi-node-driver-qcdmk" Nov 23 23:01:48.830371 kubelet[2752]: I1123 23:01:48.828946 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/65c6ee75-f266-4d8e-9f91-7935bbe3f792-registration-dir\") pod \"csi-node-driver-qcdmk\" (UID: \"65c6ee75-f266-4d8e-9f91-7935bbe3f792\") " pod="calico-system/csi-node-driver-qcdmk" Nov 23 23:01:48.830371 kubelet[2752]: I1123 23:01:48.828987 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/65c6ee75-f266-4d8e-9f91-7935bbe3f792-socket-dir\") pod \"csi-node-driver-qcdmk\" (UID: \"65c6ee75-f266-4d8e-9f91-7935bbe3f792\") " pod="calico-system/csi-node-driver-qcdmk" Nov 23 23:01:48.830371 kubelet[2752]: I1123 23:01:48.829085 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jw57\" (UniqueName: \"kubernetes.io/projected/65c6ee75-f266-4d8e-9f91-7935bbe3f792-kube-api-access-4jw57\") pod \"csi-node-driver-qcdmk\" (UID: \"65c6ee75-f266-4d8e-9f91-7935bbe3f792\") " pod="calico-system/csi-node-driver-qcdmk" Nov 23 23:01:48.835355 kubelet[2752]: E1123 23:01:48.833527 2752 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.835479 kubelet[2752]: W1123 23:01:48.835360 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.835479 kubelet[2752]: E1123 23:01:48.835412 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.842631 kubelet[2752]: E1123 23:01:48.842549 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.842631 kubelet[2752]: W1123 23:01:48.842618 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.842796 kubelet[2752]: E1123 23:01:48.842643 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.852627 containerd[1555]: time="2025-11-23T23:01:48.852285536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc6687cbf-6pv7h,Uid:8373f21b-2da2-4451-9bb1-c63e488456cb,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:48.866241 kubelet[2752]: E1123 23:01:48.864487 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.866797 kubelet[2752]: W1123 23:01:48.866414 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.866797 kubelet[2752]: E1123 23:01:48.866451 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.890765 containerd[1555]: time="2025-11-23T23:01:48.890712409Z" level=info msg="connecting to shim 790e6dc6b597384b7ed549462ffb78fa8918e60ee320a7f3116c02e91212a455" address="unix:///run/containerd/s/af9985ff756d487b797b8c722718b3efd14dc522bd7c7b58b7bb0bec784e27a7" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:48.927511 systemd[1]: Started cri-containerd-790e6dc6b597384b7ed549462ffb78fa8918e60ee320a7f3116c02e91212a455.scope - libcontainer container 790e6dc6b597384b7ed549462ffb78fa8918e60ee320a7f3116c02e91212a455. Nov 23 23:01:48.930531 kubelet[2752]: E1123 23:01:48.930495 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.930531 kubelet[2752]: W1123 23:01:48.930521 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.930719 kubelet[2752]: E1123 23:01:48.930543 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:48.932011 kubelet[2752]: E1123 23:01:48.931799 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.932011 kubelet[2752]: W1123 23:01:48.931827 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.932011 kubelet[2752]: E1123 23:01:48.931845 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.932825 kubelet[2752]: E1123 23:01:48.932802 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.932825 kubelet[2752]: W1123 23:01:48.932821 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.933463 kubelet[2752]: E1123 23:01:48.932843 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.933893 kubelet[2752]: E1123 23:01:48.933674 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.933893 kubelet[2752]: W1123 23:01:48.933787 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.933893 kubelet[2752]: E1123 23:01:48.933806 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.935023 kubelet[2752]: E1123 23:01:48.934983 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.935023 kubelet[2752]: W1123 23:01:48.935012 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.936472 kubelet[2752]: E1123 23:01:48.936394 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.936472 kubelet[2752]: W1123 23:01:48.936433 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.936947 kubelet[2752]: E1123 23:01:48.936884 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.937114 kubelet[2752]: E1123 23:01:48.937075 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:48.938803 kubelet[2752]: E1123 23:01:48.938769 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.938803 kubelet[2752]: W1123 23:01:48.938790 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.939664 kubelet[2752]: E1123 23:01:48.939622 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.939664 kubelet[2752]: W1123 23:01:48.939646 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.940389 kubelet[2752]: E1123 23:01:48.939864 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.940389 kubelet[2752]: E1123 23:01:48.939893 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.940617 kubelet[2752]: E1123 23:01:48.940594 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.940731 kubelet[2752]: W1123 23:01:48.940615 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.940731 kubelet[2752]: E1123 23:01:48.940649 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.940905 kubelet[2752]: E1123 23:01:48.940874 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.940905 kubelet[2752]: W1123 23:01:48.940903 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.941303 kubelet[2752]: E1123 23:01:48.941223 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:48.941546 kubelet[2752]: E1123 23:01:48.941522 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.941546 kubelet[2752]: W1123 23:01:48.941542 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.941981 kubelet[2752]: E1123 23:01:48.941756 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.941981 kubelet[2752]: W1123 23:01:48.941766 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.942210 kubelet[2752]: E1123 23:01:48.942134 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.942210 kubelet[2752]: E1123 23:01:48.942159 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.942600 kubelet[2752]: E1123 23:01:48.942575 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.942600 kubelet[2752]: W1123 23:01:48.942594 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.942755 kubelet[2752]: E1123 23:01:48.942660 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.943501 kubelet[2752]: E1123 23:01:48.943479 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.943501 kubelet[2752]: W1123 23:01:48.943499 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.943663 kubelet[2752]: E1123 23:01:48.943588 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.943735 kubelet[2752]: E1123 23:01:48.943720 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.943735 kubelet[2752]: W1123 23:01:48.943732 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.943862 kubelet[2752]: E1123 23:01:48.943782 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:48.944196 kubelet[2752]: E1123 23:01:48.944174 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.944196 kubelet[2752]: W1123 23:01:48.944190 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.944442 kubelet[2752]: E1123 23:01:48.944226 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.944442 kubelet[2752]: E1123 23:01:48.944354 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.944442 kubelet[2752]: W1123 23:01:48.944363 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.945005 kubelet[2752]: E1123 23:01:48.944966 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.945422 kubelet[2752]: E1123 23:01:48.945399 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.945422 kubelet[2752]: W1123 23:01:48.945419 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.945769 kubelet[2752]: E1123 23:01:48.945491 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.945769 kubelet[2752]: E1123 23:01:48.945589 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.945769 kubelet[2752]: W1123 23:01:48.945601 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.945769 kubelet[2752]: E1123 23:01:48.945632 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.947046 kubelet[2752]: E1123 23:01:48.947019 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.947046 kubelet[2752]: W1123 23:01:48.947039 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.947366 kubelet[2752]: E1123 23:01:48.947172 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:48.947528 kubelet[2752]: E1123 23:01:48.947319 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.947869 kubelet[2752]: W1123 23:01:48.947692 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.947869 kubelet[2752]: E1123 23:01:48.947736 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.949498 kubelet[2752]: E1123 23:01:48.949380 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.949498 kubelet[2752]: W1123 23:01:48.949425 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.949498 kubelet[2752]: E1123 23:01:48.949480 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.950048 kubelet[2752]: E1123 23:01:48.949925 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.950303 kubelet[2752]: W1123 23:01:48.950151 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.950303 kubelet[2752]: E1123 23:01:48.950215 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.950866 kubelet[2752]: E1123 23:01:48.950806 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.951220 kubelet[2752]: W1123 23:01:48.951009 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.951220 kubelet[2752]: E1123 23:01:48.951079 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.951776 kubelet[2752]: E1123 23:01:48.951757 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.952495 kubelet[2752]: W1123 23:01:48.951944 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.952495 kubelet[2752]: E1123 23:01:48.951969 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:48.968083 kubelet[2752]: E1123 23:01:48.968054 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:48.968228 kubelet[2752]: W1123 23:01:48.968213 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:48.968365 kubelet[2752]: E1123 23:01:48.968321 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:48.979504 containerd[1555]: time="2025-11-23T23:01:48.979458321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4dls,Uid:4c4f048a-3bf5-45a9-8802-7244b240841c,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:49.009544 containerd[1555]: time="2025-11-23T23:01:49.009417876Z" level=info msg="connecting to shim badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55" address="unix:///run/containerd/s/d0fb1ba48ee97c2d941d4fac2efa4c3cafbaadb5700fe5d70db68afa66ad46fb" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:01:49.011431 containerd[1555]: time="2025-11-23T23:01:49.011377829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fc6687cbf-6pv7h,Uid:8373f21b-2da2-4451-9bb1-c63e488456cb,Namespace:calico-system,Attempt:0,} returns sandbox id \"790e6dc6b597384b7ed549462ffb78fa8918e60ee320a7f3116c02e91212a455\"" Nov 23 23:01:49.015223 containerd[1555]: time="2025-11-23T23:01:49.015079529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 23:01:49.037617 systemd[1]: Started cri-containerd-badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55.scope - libcontainer container badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55. Nov 23 23:01:49.080702 containerd[1555]: time="2025-11-23T23:01:49.080173797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4dls,Uid:4c4f048a-3bf5-45a9-8802-7244b240841c,Namespace:calico-system,Attempt:0,} returns sandbox id \"badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55\"" Nov 23 23:01:50.589750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716534594.mount: Deactivated successfully. 
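Editor's note: the repeated driver-call warnings above come from the kubelet's FlexVolume prober invoking a driver binary (nodeagent~uds/uds) that is not installed on the host; the driver produces no output, and decoding that empty output is what yields "unexpected end of JSON input". A minimal reproduction of just the decoding error, not taken from the log:

```go
// Sketch: decoding an empty driver response with encoding/json reproduces the error in the log.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status) // empty output from the missing driver binary
	fmt.Println(err)                           // unexpected end of JSON input
}
```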
Nov 23 23:01:50.867829 kubelet[2752]: E1123 23:01:50.867681 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:01:51.490510 containerd[1555]: time="2025-11-23T23:01:51.490445863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:51.492098 containerd[1555]: time="2025-11-23T23:01:51.492044086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 23:01:51.492989 containerd[1555]: time="2025-11-23T23:01:51.492917898Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:51.496812 containerd[1555]: time="2025-11-23T23:01:51.496724753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:51.499674 containerd[1555]: time="2025-11-23T23:01:51.499612155Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.484482625s" Nov 23 23:01:51.499674 containerd[1555]: time="2025-11-23T23:01:51.499667156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 23:01:51.501326 containerd[1555]: time="2025-11-23T23:01:51.501102336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 23:01:51.520012 containerd[1555]: time="2025-11-23T23:01:51.519702045Z" level=info msg="CreateContainer within sandbox \"790e6dc6b597384b7ed549462ffb78fa8918e60ee320a7f3116c02e91212a455\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 23:01:51.531199 containerd[1555]: time="2025-11-23T23:01:51.531156810Z" level=info msg="Container 7c337d066705a6972c8a317a9ec887d591e7c58e8a9e167ef069fd630f5bda6f: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:51.535814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208835143.mount: Deactivated successfully. 
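Editor's note: the recurring "network is not ready ... cni plugin not initialized" errors for csi-node-driver-qcdmk reflect the runtime's NetworkReady condition being false until Calico installs a CNI configuration. A hypothetical sketch of querying that condition over CRI follows, with the same assumed containerd socket path as in the earlier sketches.

```go
// Sketch: read the runtime's RuntimeReady/NetworkReady conditions via the CRI Status call.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock", // assumed CRI endpoint
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	rtc := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rtc.Status(ctx, &runtimeapi.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Status.Conditions {
		// Until a CNI config is dropped in, NetworkReady stays false with reason NetworkPluginNotReady.
		fmt.Printf("%s=%v reason=%s\n", c.Type, c.Status, c.Reason)
	}
}
```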
Nov 23 23:01:51.543457 containerd[1555]: time="2025-11-23T23:01:51.543388026Z" level=info msg="CreateContainer within sandbox \"790e6dc6b597384b7ed549462ffb78fa8918e60ee320a7f3116c02e91212a455\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7c337d066705a6972c8a317a9ec887d591e7c58e8a9e167ef069fd630f5bda6f\"" Nov 23 23:01:51.544327 containerd[1555]: time="2025-11-23T23:01:51.544091276Z" level=info msg="StartContainer for \"7c337d066705a6972c8a317a9ec887d591e7c58e8a9e167ef069fd630f5bda6f\"" Nov 23 23:01:51.546035 containerd[1555]: time="2025-11-23T23:01:51.545989143Z" level=info msg="connecting to shim 7c337d066705a6972c8a317a9ec887d591e7c58e8a9e167ef069fd630f5bda6f" address="unix:///run/containerd/s/af9985ff756d487b797b8c722718b3efd14dc522bd7c7b58b7bb0bec784e27a7" protocol=ttrpc version=3 Nov 23 23:01:51.570576 systemd[1]: Started cri-containerd-7c337d066705a6972c8a317a9ec887d591e7c58e8a9e167ef069fd630f5bda6f.scope - libcontainer container 7c337d066705a6972c8a317a9ec887d591e7c58e8a9e167ef069fd630f5bda6f. Nov 23 23:01:51.621497 containerd[1555]: time="2025-11-23T23:01:51.621405791Z" level=info msg="StartContainer for \"7c337d066705a6972c8a317a9ec887d591e7c58e8a9e167ef069fd630f5bda6f\" returns successfully" Nov 23 23:01:52.038901 kubelet[2752]: E1123 23:01:52.038859 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.038901 kubelet[2752]: W1123 23:01:52.038890 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.039499 kubelet[2752]: E1123 23:01:52.038917 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.039499 kubelet[2752]: E1123 23:01:52.039427 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.039499 kubelet[2752]: W1123 23:01:52.039458 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.039671 kubelet[2752]: E1123 23:01:52.039529 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.040642 kubelet[2752]: E1123 23:01:52.040580 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.040642 kubelet[2752]: W1123 23:01:52.040638 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.040762 kubelet[2752]: E1123 23:01:52.040657 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:52.041247 kubelet[2752]: E1123 23:01:52.041215 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.041247 kubelet[2752]: W1123 23:01:52.041235 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.041483 kubelet[2752]: E1123 23:01:52.041251 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.041623 kubelet[2752]: E1123 23:01:52.041595 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.041684 kubelet[2752]: W1123 23:01:52.041626 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.041684 kubelet[2752]: E1123 23:01:52.041641 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.041951 kubelet[2752]: E1123 23:01:52.041931 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.041951 kubelet[2752]: W1123 23:01:52.041948 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.042050 kubelet[2752]: E1123 23:01:52.041960 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.043359 kubelet[2752]: E1123 23:01:52.043331 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.043359 kubelet[2752]: W1123 23:01:52.043349 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.043491 kubelet[2752]: E1123 23:01:52.043368 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.043819 kubelet[2752]: E1123 23:01:52.043797 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.043819 kubelet[2752]: W1123 23:01:52.043817 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.043928 kubelet[2752]: E1123 23:01:52.043832 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:52.044514 kubelet[2752]: E1123 23:01:52.044487 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.044514 kubelet[2752]: W1123 23:01:52.044506 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.044613 kubelet[2752]: E1123 23:01:52.044521 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.045117 kubelet[2752]: E1123 23:01:52.045093 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.045117 kubelet[2752]: W1123 23:01:52.045113 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.045340 kubelet[2752]: E1123 23:01:52.045131 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.045508 kubelet[2752]: E1123 23:01:52.045487 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.045508 kubelet[2752]: W1123 23:01:52.045506 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.045639 kubelet[2752]: E1123 23:01:52.045522 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.045938 kubelet[2752]: E1123 23:01:52.045916 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.045938 kubelet[2752]: W1123 23:01:52.045935 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.046025 kubelet[2752]: E1123 23:01:52.045949 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.046569 kubelet[2752]: E1123 23:01:52.046545 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.046569 kubelet[2752]: W1123 23:01:52.046565 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.046719 kubelet[2752]: E1123 23:01:52.046633 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:52.047166 kubelet[2752]: E1123 23:01:52.047141 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.047166 kubelet[2752]: W1123 23:01:52.047160 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.047327 kubelet[2752]: E1123 23:01:52.047174 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.047615 kubelet[2752]: E1123 23:01:52.047593 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.047615 kubelet[2752]: W1123 23:01:52.047610 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.047718 kubelet[2752]: E1123 23:01:52.047626 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.062967 kubelet[2752]: E1123 23:01:52.062937 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.063566 kubelet[2752]: W1123 23:01:52.063169 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.063566 kubelet[2752]: E1123 23:01:52.063200 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.064268 kubelet[2752]: E1123 23:01:52.064249 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.065254 kubelet[2752]: W1123 23:01:52.064436 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.065602 kubelet[2752]: E1123 23:01:52.065392 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.065813 kubelet[2752]: E1123 23:01:52.065796 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.065887 kubelet[2752]: W1123 23:01:52.065872 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.066018 kubelet[2752]: E1123 23:01:52.065968 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:52.066181 kubelet[2752]: E1123 23:01:52.066168 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.066315 kubelet[2752]: W1123 23:01:52.066254 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.066452 kubelet[2752]: E1123 23:01:52.066419 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.066723 kubelet[2752]: E1123 23:01:52.066706 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.066860 kubelet[2752]: W1123 23:01:52.066792 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.066860 kubelet[2752]: E1123 23:01:52.066823 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.067460 kubelet[2752]: E1123 23:01:52.067430 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.067460 kubelet[2752]: W1123 23:01:52.067452 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.067727 kubelet[2752]: E1123 23:01:52.067482 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.067900 kubelet[2752]: E1123 23:01:52.067873 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.067900 kubelet[2752]: W1123 23:01:52.067892 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.068318 kubelet[2752]: E1123 23:01:52.068240 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.068569 kubelet[2752]: E1123 23:01:52.068546 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.068569 kubelet[2752]: W1123 23:01:52.068565 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.068948 kubelet[2752]: E1123 23:01:52.068921 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:52.069481 kubelet[2752]: E1123 23:01:52.069452 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.069481 kubelet[2752]: W1123 23:01:52.069478 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.069693 kubelet[2752]: E1123 23:01:52.069522 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.070539 kubelet[2752]: E1123 23:01:52.070512 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.070539 kubelet[2752]: W1123 23:01:52.070535 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.071433 kubelet[2752]: E1123 23:01:52.071401 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.071727 kubelet[2752]: E1123 23:01:52.071696 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.071727 kubelet[2752]: W1123 23:01:52.071723 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.071884 kubelet[2752]: E1123 23:01:52.071826 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.072047 kubelet[2752]: E1123 23:01:52.072026 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.072047 kubelet[2752]: W1123 23:01:52.072044 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.072433 kubelet[2752]: E1123 23:01:52.072401 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.072700 kubelet[2752]: E1123 23:01:52.072677 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.072700 kubelet[2752]: W1123 23:01:52.072697 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.072786 kubelet[2752]: E1123 23:01:52.072721 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:52.074466 kubelet[2752]: E1123 23:01:52.074434 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.074466 kubelet[2752]: W1123 23:01:52.074458 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.074578 kubelet[2752]: E1123 23:01:52.074478 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.076086 kubelet[2752]: E1123 23:01:52.076054 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.076086 kubelet[2752]: W1123 23:01:52.076079 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.076209 kubelet[2752]: E1123 23:01:52.076101 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.078492 kubelet[2752]: E1123 23:01:52.076839 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.078644 kubelet[2752]: W1123 23:01:52.078496 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.078644 kubelet[2752]: E1123 23:01:52.078529 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.079127 kubelet[2752]: E1123 23:01:52.079103 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.079127 kubelet[2752]: W1123 23:01:52.079123 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.079219 kubelet[2752]: E1123 23:01:52.079142 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:52.079574 kubelet[2752]: E1123 23:01:52.079553 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:52.079574 kubelet[2752]: W1123 23:01:52.079570 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:52.079574 kubelet[2752]: E1123 23:01:52.079631 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:52.868923 kubelet[2752]: E1123 23:01:52.868689 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:01:52.994683 kubelet[2752]: I1123 23:01:52.994613 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:01:53.054026 kubelet[2752]: E1123 23:01:53.053897 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.054377 kubelet[2752]: W1123 23:01:53.053925 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.054786 kubelet[2752]: E1123 23:01:53.054398 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.054786 kubelet[2752]: E1123 23:01:53.054710 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.054993 kubelet[2752]: W1123 23:01:53.054722 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.054993 kubelet[2752]: E1123 23:01:53.054920 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.055226 kubelet[2752]: E1123 23:01:53.055210 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.055253 kubelet[2752]: W1123 23:01:53.055224 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.055253 kubelet[2752]: E1123 23:01:53.055236 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.055546 kubelet[2752]: E1123 23:01:53.055530 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.055546 kubelet[2752]: W1123 23:01:53.055545 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.055643 kubelet[2752]: E1123 23:01:53.055604 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.055866 kubelet[2752]: E1123 23:01:53.055840 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.055866 kubelet[2752]: W1123 23:01:53.055853 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.055866 kubelet[2752]: E1123 23:01:53.055864 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.056225 kubelet[2752]: E1123 23:01:53.056208 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.056225 kubelet[2752]: W1123 23:01:53.056223 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.056362 kubelet[2752]: E1123 23:01:53.056304 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.056638 kubelet[2752]: E1123 23:01:53.056587 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.056691 kubelet[2752]: W1123 23:01:53.056680 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.056715 kubelet[2752]: E1123 23:01:53.056696 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.057083 kubelet[2752]: E1123 23:01:53.057045 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.057083 kubelet[2752]: W1123 23:01:53.057062 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.057083 kubelet[2752]: E1123 23:01:53.057074 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.057545 kubelet[2752]: E1123 23:01:53.057505 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.057545 kubelet[2752]: W1123 23:01:53.057523 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.057545 kubelet[2752]: E1123 23:01:53.057534 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.057888 kubelet[2752]: E1123 23:01:53.057873 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.057888 kubelet[2752]: W1123 23:01:53.057886 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.057963 kubelet[2752]: E1123 23:01:53.057898 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.058115 kubelet[2752]: E1123 23:01:53.058101 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.058147 kubelet[2752]: W1123 23:01:53.058115 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.058147 kubelet[2752]: E1123 23:01:53.058125 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.058536 kubelet[2752]: E1123 23:01:53.058495 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.058536 kubelet[2752]: W1123 23:01:53.058514 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.058585 kubelet[2752]: E1123 23:01:53.058536 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.059396 kubelet[2752]: E1123 23:01:53.059168 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.059396 kubelet[2752]: W1123 23:01:53.059192 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.059396 kubelet[2752]: E1123 23:01:53.059269 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.059811 kubelet[2752]: E1123 23:01:53.059753 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.059869 kubelet[2752]: W1123 23:01:53.059812 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.059869 kubelet[2752]: E1123 23:01:53.059830 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.060163 kubelet[2752]: E1123 23:01:53.060143 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.060194 kubelet[2752]: W1123 23:01:53.060166 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.060194 kubelet[2752]: E1123 23:01:53.060182 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.074695 kubelet[2752]: E1123 23:01:53.074657 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.075183 kubelet[2752]: W1123 23:01:53.075153 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.075951 kubelet[2752]: E1123 23:01:53.075578 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.076733 kubelet[2752]: E1123 23:01:53.076446 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.076733 kubelet[2752]: W1123 23:01:53.076471 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.076733 kubelet[2752]: E1123 23:01:53.076502 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.077663 kubelet[2752]: E1123 23:01:53.077637 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.078036 kubelet[2752]: W1123 23:01:53.077824 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.078036 kubelet[2752]: E1123 23:01:53.077858 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.078626 kubelet[2752]: E1123 23:01:53.078481 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.078626 kubelet[2752]: W1123 23:01:53.078503 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.078877 kubelet[2752]: E1123 23:01:53.078748 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.079465 kubelet[2752]: E1123 23:01:53.079431 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.079465 kubelet[2752]: W1123 23:01:53.079455 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.079655 kubelet[2752]: E1123 23:01:53.079619 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.079716 kubelet[2752]: E1123 23:01:53.079703 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.079716 kubelet[2752]: W1123 23:01:53.079713 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.079817 kubelet[2752]: E1123 23:01:53.079792 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.079908 kubelet[2752]: E1123 23:01:53.079887 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.079908 kubelet[2752]: W1123 23:01:53.079898 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.079958 kubelet[2752]: E1123 23:01:53.079917 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.080115 kubelet[2752]: E1123 23:01:53.080041 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.080115 kubelet[2752]: W1123 23:01:53.080092 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.080115 kubelet[2752]: E1123 23:01:53.080103 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.080371 kubelet[2752]: E1123 23:01:53.080352 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.080371 kubelet[2752]: W1123 23:01:53.080367 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.081178 kubelet[2752]: E1123 23:01:53.080384 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.081178 kubelet[2752]: E1123 23:01:53.080582 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.081178 kubelet[2752]: W1123 23:01:53.080607 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.081178 kubelet[2752]: E1123 23:01:53.080618 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.081279 containerd[1555]: time="2025-11-23T23:01:53.080497465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:53.082205 kubelet[2752]: E1123 23:01:53.082155 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.082331 kubelet[2752]: W1123 23:01:53.082187 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.082502 containerd[1555]: time="2025-11-23T23:01:53.082468290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 23:01:53.083133 kubelet[2752]: E1123 23:01:53.083114 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.083182 kubelet[2752]: W1123 23:01:53.083165 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.083560 containerd[1555]: time="2025-11-23T23:01:53.083508463Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:53.083942 kubelet[2752]: E1123 23:01:53.083918 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.084136 kubelet[2752]: E1123 23:01:53.084082 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.084366 kubelet[2752]: E1123 23:01:53.084349 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.084411 kubelet[2752]: W1123 23:01:53.084367 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.084411 kubelet[2752]: E1123 23:01:53.084381 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.087117 kubelet[2752]: E1123 23:01:53.086517 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.087117 kubelet[2752]: W1123 23:01:53.086548 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.087117 kubelet[2752]: E1123 23:01:53.086575 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.087441 kubelet[2752]: E1123 23:01:53.087418 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.087538 kubelet[2752]: W1123 23:01:53.087518 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.087681 kubelet[2752]: E1123 23:01:53.087659 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.089440 containerd[1555]: time="2025-11-23T23:01:53.089384217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:53.090622 kubelet[2752]: E1123 23:01:53.090575 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.091333 kubelet[2752]: W1123 23:01:53.090767 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.091333 kubelet[2752]: E1123 23:01:53.090822 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.091677 kubelet[2752]: E1123 23:01:53.091658 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.091763 kubelet[2752]: W1123 23:01:53.091749 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.091850 kubelet[2752]: E1123 23:01:53.091836 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 23:01:53.092128 kubelet[2752]: E1123 23:01:53.092113 2752 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 23:01:53.092213 kubelet[2752]: W1123 23:01:53.092200 2752 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 23:01:53.092280 kubelet[2752]: E1123 23:01:53.092269 2752 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 23:01:53.092450 containerd[1555]: time="2025-11-23T23:01:53.092406656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.591261958s" Nov 23 23:01:53.092502 containerd[1555]: time="2025-11-23T23:01:53.092451296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 23:01:53.097712 containerd[1555]: time="2025-11-23T23:01:53.097663922Z" level=info msg="CreateContainer within sandbox \"badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 23:01:53.107671 containerd[1555]: time="2025-11-23T23:01:53.107608408Z" level=info msg="Container 7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:53.116516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018412413.mount: Deactivated successfully. Nov 23 23:01:53.127920 containerd[1555]: time="2025-11-23T23:01:53.125754158Z" level=info msg="CreateContainer within sandbox \"badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c\"" Nov 23 23:01:53.127920 containerd[1555]: time="2025-11-23T23:01:53.126466687Z" level=info msg="StartContainer for \"7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c\"" Nov 23 23:01:53.130716 containerd[1555]: time="2025-11-23T23:01:53.130431417Z" level=info msg="connecting to shim 7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c" address="unix:///run/containerd/s/d0fb1ba48ee97c2d941d4fac2efa4c3cafbaadb5700fe5d70db68afa66ad46fb" protocol=ttrpc version=3 Nov 23 23:01:53.154794 systemd[1]: Started cri-containerd-7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c.scope - libcontainer container 7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c. Nov 23 23:01:53.227825 containerd[1555]: time="2025-11-23T23:01:53.227785811Z" level=info msg="StartContainer for \"7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c\" returns successfully" Nov 23 23:01:53.250719 systemd[1]: cri-containerd-7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c.scope: Deactivated successfully. 
Nov 23 23:01:53.256756 containerd[1555]: time="2025-11-23T23:01:53.256678977Z" level=info msg="received container exit event container_id:\"7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c\" id:\"7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c\" pid:3414 exited_at:{seconds:1763938913 nanos:255572803}" Nov 23 23:01:53.282754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e5a6b37f41cd3615af93ad767e287ad4e3fa1a8b09978b605b34c1ad3c0526c-rootfs.mount: Deactivated successfully. Nov 23 23:01:54.004055 containerd[1555]: time="2025-11-23T23:01:54.003946682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 23:01:54.032326 kubelet[2752]: I1123 23:01:54.029251 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fc6687cbf-6pv7h" podStartSLOduration=3.543162295 podStartE2EDuration="6.029232503s" podCreationTimestamp="2025-11-23 23:01:48 +0000 UTC" firstStartedPulling="2025-11-23 23:01:49.014671603 +0000 UTC m=+25.273252958" lastFinishedPulling="2025-11-23 23:01:51.500741811 +0000 UTC m=+27.759323166" observedRunningTime="2025-11-23 23:01:52.018667502 +0000 UTC m=+28.277248897" watchObservedRunningTime="2025-11-23 23:01:54.029232503 +0000 UTC m=+30.287813858" Nov 23 23:01:54.868611 kubelet[2752]: E1123 23:01:54.868467 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:01:56.868222 kubelet[2752]: E1123 23:01:56.868171 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:01:57.379204 containerd[1555]: time="2025-11-23T23:01:57.379124962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:57.380560 containerd[1555]: time="2025-11-23T23:01:57.380514935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 23:01:57.380881 containerd[1555]: time="2025-11-23T23:01:57.380830658Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:57.384172 containerd[1555]: time="2025-11-23T23:01:57.384134051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:01:57.384815 containerd[1555]: time="2025-11-23T23:01:57.384784257Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.380786934s" Nov 23 23:01:57.384980 containerd[1555]: time="2025-11-23T23:01:57.384953459Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 23:01:57.388369 containerd[1555]: time="2025-11-23T23:01:57.388143530Z" level=info msg="CreateContainer within sandbox \"badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 23:01:57.407934 containerd[1555]: time="2025-11-23T23:01:57.406308908Z" level=info msg="Container d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:01:57.416434 containerd[1555]: time="2025-11-23T23:01:57.416371966Z" level=info msg="CreateContainer within sandbox \"badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b\"" Nov 23 23:01:57.418644 containerd[1555]: time="2025-11-23T23:01:57.417535578Z" level=info msg="StartContainer for \"d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b\"" Nov 23 23:01:57.421004 containerd[1555]: time="2025-11-23T23:01:57.420917211Z" level=info msg="connecting to shim d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b" address="unix:///run/containerd/s/d0fb1ba48ee97c2d941d4fac2efa4c3cafbaadb5700fe5d70db68afa66ad46fb" protocol=ttrpc version=3 Nov 23 23:01:57.449532 systemd[1]: Started cri-containerd-d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b.scope - libcontainer container d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b. Nov 23 23:01:57.524914 containerd[1555]: time="2025-11-23T23:01:57.524855548Z" level=info msg="StartContainer for \"d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b\" returns successfully" Nov 23 23:01:58.067679 containerd[1555]: time="2025-11-23T23:01:58.067587379Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 23:01:58.073126 systemd[1]: cri-containerd-d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b.scope: Deactivated successfully. Nov 23 23:01:58.073415 systemd[1]: cri-containerd-d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b.scope: Consumed 502ms CPU time, 186.8M memory peak, 165.9M written to disk. Nov 23 23:01:58.079558 containerd[1555]: time="2025-11-23T23:01:58.079509729Z" level=info msg="received container exit event container_id:\"d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b\" id:\"d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b\" pid:3472 exited_at:{seconds:1763938918 nanos:78584040}" Nov 23 23:01:58.110884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3d63ec7754b9102604e77152e4adc9294c025e38b4b94e446a88c1185a8b53b-rootfs.mount: Deactivated successfully. Nov 23 23:01:58.137837 kubelet[2752]: I1123 23:01:58.137760 2752 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 23:01:58.185455 systemd[1]: Created slice kubepods-burstable-pod03f9e6e9_b3a5_4fb3_a283_2563920974fa.slice - libcontainer container kubepods-burstable-pod03f9e6e9_b3a5_4fb3_a283_2563920974fa.slice. 
Nov 23 23:01:58.219575 kubelet[2752]: I1123 23:01:58.218331 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03f9e6e9-b3a5-4fb3-a283-2563920974fa-config-volume\") pod \"coredns-668d6bf9bc-xms8x\" (UID: \"03f9e6e9-b3a5-4fb3-a283-2563920974fa\") " pod="kube-system/coredns-668d6bf9bc-xms8x" Nov 23 23:01:58.219575 kubelet[2752]: I1123 23:01:58.218382 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4x2\" (UniqueName: \"kubernetes.io/projected/03f9e6e9-b3a5-4fb3-a283-2563920974fa-kube-api-access-7q4x2\") pod \"coredns-668d6bf9bc-xms8x\" (UID: \"03f9e6e9-b3a5-4fb3-a283-2563920974fa\") " pod="kube-system/coredns-668d6bf9bc-xms8x" Nov 23 23:01:58.218995 systemd[1]: Created slice kubepods-besteffort-pod8e7fff62_849b_430a_8c5a_7b0e171a5c60.slice - libcontainer container kubepods-besteffort-pod8e7fff62_849b_430a_8c5a_7b0e171a5c60.slice. Nov 23 23:01:58.226880 kubelet[2752]: W1123 23:01:58.226211 2752 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4459-2-1-d-6a40a07c08" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-2-1-d-6a40a07c08' and this object Nov 23 23:01:58.228331 kubelet[2752]: E1123 23:01:58.227053 2752 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4459-2-1-d-6a40a07c08\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-2-1-d-6a40a07c08' and this object" logger="UnhandledError" Nov 23 23:01:58.228331 kubelet[2752]: W1123 23:01:58.227129 2752 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4459-2-1-d-6a40a07c08" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-2-1-d-6a40a07c08' and this object Nov 23 23:01:58.228331 kubelet[2752]: E1123 23:01:58.227142 2752 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4459-2-1-d-6a40a07c08\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-2-1-d-6a40a07c08' and this object" logger="UnhandledError" Nov 23 23:01:58.228331 kubelet[2752]: W1123 23:01:58.227177 2752 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4459-2-1-d-6a40a07c08" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-2-1-d-6a40a07c08' and this object Nov 23 23:01:58.228510 kubelet[2752]: E1123 23:01:58.227189 2752 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4459-2-1-d-6a40a07c08\" cannot list resource \"configmaps\" in API group \"\" in 
the namespace \"calico-system\": no relationship found between node 'ci-4459-2-1-d-6a40a07c08' and this object" logger="UnhandledError" Nov 23 23:01:58.233243 systemd[1]: Created slice kubepods-besteffort-pod3b014a55_de73_4ac9_9e35_2cc72ed4bcca.slice - libcontainer container kubepods-besteffort-pod3b014a55_de73_4ac9_9e35_2cc72ed4bcca.slice. Nov 23 23:01:58.242287 systemd[1]: Created slice kubepods-besteffort-podf1e9d56c_d386_4be6_909e_83d2bc375abf.slice - libcontainer container kubepods-besteffort-podf1e9d56c_d386_4be6_909e_83d2bc375abf.slice. Nov 23 23:01:58.252323 systemd[1]: Created slice kubepods-besteffort-pod1e722cd7_3fb4_43d9_b64b_32096b2087bd.slice - libcontainer container kubepods-besteffort-pod1e722cd7_3fb4_43d9_b64b_32096b2087bd.slice. Nov 23 23:01:58.260388 systemd[1]: Created slice kubepods-burstable-pod43989c61_ebed_4d18_99cf_851dcb1b5eb3.slice - libcontainer container kubepods-burstable-pod43989c61_ebed_4d18_99cf_851dcb1b5eb3.slice. Nov 23 23:01:58.270723 systemd[1]: Created slice kubepods-besteffort-pod0f266d4c_4f00_43ea_b251_4bdc9532cfcf.slice - libcontainer container kubepods-besteffort-pod0f266d4c_4f00_43ea_b251_4bdc9532cfcf.slice. Nov 23 23:01:58.281928 systemd[1]: Created slice kubepods-besteffort-podc25375d2_2332_49bd_a8e3_61dfcb956c34.slice - libcontainer container kubepods-besteffort-podc25375d2_2332_49bd_a8e3_61dfcb956c34.slice. Nov 23 23:01:58.319768 kubelet[2752]: I1123 23:01:58.319354 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmwl\" (UniqueName: \"kubernetes.io/projected/8e7fff62-849b-430a-8c5a-7b0e171a5c60-kube-api-access-mbmwl\") pod \"calico-apiserver-5b9f97f6d6-lkrpd\" (UID: \"8e7fff62-849b-430a-8c5a-7b0e171a5c60\") " pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" Nov 23 23:01:58.319768 kubelet[2752]: I1123 23:01:58.319712 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-goldmane-key-pair\") pod \"goldmane-666569f655-msv7x\" (UID: \"0f266d4c-4f00-43ea-b251-4bdc9532cfcf\") " pod="calico-system/goldmane-666569f655-msv7x" Nov 23 23:01:58.320432 kubelet[2752]: I1123 23:01:58.320249 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43989c61-ebed-4d18-99cf-851dcb1b5eb3-config-volume\") pod \"coredns-668d6bf9bc-pllfh\" (UID: \"43989c61-ebed-4d18-99cf-851dcb1b5eb3\") " pod="kube-system/coredns-668d6bf9bc-pllfh" Nov 23 23:01:58.320852 kubelet[2752]: I1123 23:01:58.320745 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-goldmane-ca-bundle\") pod \"goldmane-666569f655-msv7x\" (UID: \"0f266d4c-4f00-43ea-b251-4bdc9532cfcf\") " pod="calico-system/goldmane-666569f655-msv7x" Nov 23 23:01:58.321132 kubelet[2752]: I1123 23:01:58.321072 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klfm6\" (UniqueName: \"kubernetes.io/projected/43989c61-ebed-4d18-99cf-851dcb1b5eb3-kube-api-access-klfm6\") pod \"coredns-668d6bf9bc-pllfh\" (UID: \"43989c61-ebed-4d18-99cf-851dcb1b5eb3\") " pod="kube-system/coredns-668d6bf9bc-pllfh" Nov 23 23:01:58.321522 kubelet[2752]: I1123 23:01:58.321410 2752 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-backend-key-pair\") pod \"whisker-7cff5b69f8-fnl2z\" (UID: \"f1e9d56c-d386-4be6-909e-83d2bc375abf\") " pod="calico-system/whisker-7cff5b69f8-fnl2z" Nov 23 23:01:58.321972 kubelet[2752]: I1123 23:01:58.321787 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rsnf\" (UniqueName: \"kubernetes.io/projected/3b014a55-de73-4ac9-9e35-2cc72ed4bcca-kube-api-access-9rsnf\") pod \"calico-apiserver-5b9f97f6d6-6nxhq\" (UID: \"3b014a55-de73-4ac9-9e35-2cc72ed4bcca\") " pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" Nov 23 23:01:58.323382 kubelet[2752]: I1123 23:01:58.323277 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-ca-bundle\") pod \"whisker-7cff5b69f8-fnl2z\" (UID: \"f1e9d56c-d386-4be6-909e-83d2bc375abf\") " pod="calico-system/whisker-7cff5b69f8-fnl2z" Nov 23 23:01:58.323382 kubelet[2752]: I1123 23:01:58.323360 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dqb\" (UniqueName: \"kubernetes.io/projected/1e722cd7-3fb4-43d9-b64b-32096b2087bd-kube-api-access-w8dqb\") pod \"calico-apiserver-5f7f78c7-d8v97\" (UID: \"1e722cd7-3fb4-43d9-b64b-32096b2087bd\") " pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" Nov 23 23:01:58.323537 kubelet[2752]: I1123 23:01:58.323520 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpn8\" (UniqueName: \"kubernetes.io/projected/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-kube-api-access-6bpn8\") pod \"goldmane-666569f655-msv7x\" (UID: \"0f266d4c-4f00-43ea-b251-4bdc9532cfcf\") " pod="calico-system/goldmane-666569f655-msv7x" Nov 23 23:01:58.323691 kubelet[2752]: I1123 23:01:58.323607 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztmvb\" (UniqueName: \"kubernetes.io/projected/f1e9d56c-d386-4be6-909e-83d2bc375abf-kube-api-access-ztmvb\") pod \"whisker-7cff5b69f8-fnl2z\" (UID: \"f1e9d56c-d386-4be6-909e-83d2bc375abf\") " pod="calico-system/whisker-7cff5b69f8-fnl2z" Nov 23 23:01:58.323800 kubelet[2752]: I1123 23:01:58.323784 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e722cd7-3fb4-43d9-b64b-32096b2087bd-calico-apiserver-certs\") pod \"calico-apiserver-5f7f78c7-d8v97\" (UID: \"1e722cd7-3fb4-43d9-b64b-32096b2087bd\") " pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" Nov 23 23:01:58.323936 kubelet[2752]: I1123 23:01:58.323897 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b014a55-de73-4ac9-9e35-2cc72ed4bcca-calico-apiserver-certs\") pod \"calico-apiserver-5b9f97f6d6-6nxhq\" (UID: \"3b014a55-de73-4ac9-9e35-2cc72ed4bcca\") " pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" Nov 23 23:01:58.323982 kubelet[2752]: I1123 23:01:58.323963 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/8e7fff62-849b-430a-8c5a-7b0e171a5c60-calico-apiserver-certs\") pod \"calico-apiserver-5b9f97f6d6-lkrpd\" (UID: \"8e7fff62-849b-430a-8c5a-7b0e171a5c60\") " pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" Nov 23 23:01:58.324096 kubelet[2752]: I1123 23:01:58.323987 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c25375d2-2332-49bd-a8e3-61dfcb956c34-tigera-ca-bundle\") pod \"calico-kube-controllers-56947b74b7-c65fq\" (UID: \"c25375d2-2332-49bd-a8e3-61dfcb956c34\") " pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" Nov 23 23:01:58.324096 kubelet[2752]: I1123 23:01:58.324012 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-config\") pod \"goldmane-666569f655-msv7x\" (UID: \"0f266d4c-4f00-43ea-b251-4bdc9532cfcf\") " pod="calico-system/goldmane-666569f655-msv7x" Nov 23 23:01:58.324096 kubelet[2752]: I1123 23:01:58.324047 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf2rc\" (UniqueName: \"kubernetes.io/projected/c25375d2-2332-49bd-a8e3-61dfcb956c34-kube-api-access-cf2rc\") pod \"calico-kube-controllers-56947b74b7-c65fq\" (UID: \"c25375d2-2332-49bd-a8e3-61dfcb956c34\") " pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" Nov 23 23:01:58.497246 containerd[1555]: time="2025-11-23T23:01:58.497196481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xms8x,Uid:03f9e6e9-b3a5-4fb3-a283-2563920974fa,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:58.530108 containerd[1555]: time="2025-11-23T23:01:58.530066423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-lkrpd,Uid:8e7fff62-849b-430a-8c5a-7b0e171a5c60,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:01:58.541882 containerd[1555]: time="2025-11-23T23:01:58.541503768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-6nxhq,Uid:3b014a55-de73-4ac9-9e35-2cc72ed4bcca,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:01:58.555203 containerd[1555]: time="2025-11-23T23:01:58.554402766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cff5b69f8-fnl2z,Uid:f1e9d56c-d386-4be6-909e-83d2bc375abf,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:58.561262 containerd[1555]: time="2025-11-23T23:01:58.561221629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7f78c7-d8v97,Uid:1e722cd7-3fb4-43d9-b64b-32096b2087bd,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:01:58.565361 containerd[1555]: time="2025-11-23T23:01:58.565101744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pllfh,Uid:43989c61-ebed-4d18-99cf-851dcb1b5eb3,Namespace:kube-system,Attempt:0,}" Nov 23 23:01:58.586508 containerd[1555]: time="2025-11-23T23:01:58.586241978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56947b74b7-c65fq,Uid:c25375d2-2332-49bd-a8e3-61dfcb956c34,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:58.694426 containerd[1555]: time="2025-11-23T23:01:58.694369810Z" level=error msg="Failed to destroy network for sandbox \"e8f7559267d4985fa9929c360b34672d0da2b0af66902863fc7a6caea909a30e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.701075 containerd[1555]: time="2025-11-23T23:01:58.701016191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-lkrpd,Uid:8e7fff62-849b-430a-8c5a-7b0e171a5c60,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f7559267d4985fa9929c360b34672d0da2b0af66902863fc7a6caea909a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.701848 kubelet[2752]: E1123 23:01:58.701251 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f7559267d4985fa9929c360b34672d0da2b0af66902863fc7a6caea909a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.701848 kubelet[2752]: E1123 23:01:58.701341 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f7559267d4985fa9929c360b34672d0da2b0af66902863fc7a6caea909a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" Nov 23 23:01:58.701848 kubelet[2752]: E1123 23:01:58.701362 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f7559267d4985fa9929c360b34672d0da2b0af66902863fc7a6caea909a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" Nov 23 23:01:58.702007 kubelet[2752]: E1123 23:01:58.701415 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b9f97f6d6-lkrpd_calico-apiserver(8e7fff62-849b-430a-8c5a-7b0e171a5c60)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b9f97f6d6-lkrpd_calico-apiserver(8e7fff62-849b-430a-8c5a-7b0e171a5c60)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8f7559267d4985fa9929c360b34672d0da2b0af66902863fc7a6caea909a30e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:01:58.721580 containerd[1555]: time="2025-11-23T23:01:58.721525699Z" level=error msg="Failed to destroy network for sandbox \"9a46e2196c806f5ccd99cef94e31ac7ebd881f3901ec479cdf516f29233f848c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.724013 containerd[1555]: time="2025-11-23T23:01:58.723904241Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-pllfh,Uid:43989c61-ebed-4d18-99cf-851dcb1b5eb3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a46e2196c806f5ccd99cef94e31ac7ebd881f3901ec479cdf516f29233f848c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.725013 kubelet[2752]: E1123 23:01:58.724783 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a46e2196c806f5ccd99cef94e31ac7ebd881f3901ec479cdf516f29233f848c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.725013 kubelet[2752]: E1123 23:01:58.724873 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a46e2196c806f5ccd99cef94e31ac7ebd881f3901ec479cdf516f29233f848c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pllfh" Nov 23 23:01:58.725013 kubelet[2752]: E1123 23:01:58.724895 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a46e2196c806f5ccd99cef94e31ac7ebd881f3901ec479cdf516f29233f848c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pllfh" Nov 23 23:01:58.725159 kubelet[2752]: E1123 23:01:58.724952 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pllfh_kube-system(43989c61-ebed-4d18-99cf-851dcb1b5eb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pllfh_kube-system(43989c61-ebed-4d18-99cf-851dcb1b5eb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a46e2196c806f5ccd99cef94e31ac7ebd881f3901ec479cdf516f29233f848c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pllfh" podUID="43989c61-ebed-4d18-99cf-851dcb1b5eb3" Nov 23 23:01:58.735968 containerd[1555]: time="2025-11-23T23:01:58.735873791Z" level=error msg="Failed to destroy network for sandbox \"7044c2ed81c467a2d9e6c14455e4c257689d36d2d5c1e5001f19ff4b9591b87a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.737951 containerd[1555]: time="2025-11-23T23:01:58.737832929Z" level=error msg="Failed to destroy network for sandbox \"06d691dcbb18a35540c6874b71f21a9bc911860bbbeadcc52abff9f3ab72fa4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.738177 containerd[1555]: time="2025-11-23T23:01:58.738133652Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xms8x,Uid:03f9e6e9-b3a5-4fb3-a283-2563920974fa,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7044c2ed81c467a2d9e6c14455e4c257689d36d2d5c1e5001f19ff4b9591b87a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.738631 kubelet[2752]: E1123 23:01:58.738574 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7044c2ed81c467a2d9e6c14455e4c257689d36d2d5c1e5001f19ff4b9591b87a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.739414 kubelet[2752]: E1123 23:01:58.738756 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7044c2ed81c467a2d9e6c14455e4c257689d36d2d5c1e5001f19ff4b9591b87a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xms8x" Nov 23 23:01:58.739414 kubelet[2752]: E1123 23:01:58.739369 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7044c2ed81c467a2d9e6c14455e4c257689d36d2d5c1e5001f19ff4b9591b87a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xms8x" Nov 23 23:01:58.739933 kubelet[2752]: E1123 23:01:58.739560 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xms8x_kube-system(03f9e6e9-b3a5-4fb3-a283-2563920974fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xms8x_kube-system(03f9e6e9-b3a5-4fb3-a283-2563920974fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7044c2ed81c467a2d9e6c14455e4c257689d36d2d5c1e5001f19ff4b9591b87a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xms8x" podUID="03f9e6e9-b3a5-4fb3-a283-2563920974fa" Nov 23 23:01:58.739933 kubelet[2752]: E1123 23:01:58.739886 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d691dcbb18a35540c6874b71f21a9bc911860bbbeadcc52abff9f3ab72fa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.740048 containerd[1555]: time="2025-11-23T23:01:58.739708946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-6nxhq,Uid:3b014a55-de73-4ac9-9e35-2cc72ed4bcca,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"06d691dcbb18a35540c6874b71f21a9bc911860bbbeadcc52abff9f3ab72fa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.740101 kubelet[2752]: E1123 23:01:58.739963 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d691dcbb18a35540c6874b71f21a9bc911860bbbeadcc52abff9f3ab72fa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" Nov 23 23:01:58.740101 kubelet[2752]: E1123 23:01:58.739993 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d691dcbb18a35540c6874b71f21a9bc911860bbbeadcc52abff9f3ab72fa4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" Nov 23 23:01:58.740101 kubelet[2752]: E1123 23:01:58.740037 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b9f97f6d6-6nxhq_calico-apiserver(3b014a55-de73-4ac9-9e35-2cc72ed4bcca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b9f97f6d6-6nxhq_calico-apiserver(3b014a55-de73-4ac9-9e35-2cc72ed4bcca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06d691dcbb18a35540c6874b71f21a9bc911860bbbeadcc52abff9f3ab72fa4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:01:58.774853 containerd[1555]: time="2025-11-23T23:01:58.774799508Z" level=error msg="Failed to destroy network for sandbox \"0232ddb74774d9430b87dc4579357f03a6ac9380e069f0e3c96ad8946e8b9217\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.779409 containerd[1555]: time="2025-11-23T23:01:58.779347190Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cff5b69f8-fnl2z,Uid:f1e9d56c-d386-4be6-909e-83d2bc375abf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0232ddb74774d9430b87dc4579357f03a6ac9380e069f0e3c96ad8946e8b9217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.780370 kubelet[2752]: E1123 23:01:58.779594 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0232ddb74774d9430b87dc4579357f03a6ac9380e069f0e3c96ad8946e8b9217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.780370 kubelet[2752]: E1123 
23:01:58.779666 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0232ddb74774d9430b87dc4579357f03a6ac9380e069f0e3c96ad8946e8b9217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cff5b69f8-fnl2z" Nov 23 23:01:58.780370 kubelet[2752]: E1123 23:01:58.779685 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0232ddb74774d9430b87dc4579357f03a6ac9380e069f0e3c96ad8946e8b9217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7cff5b69f8-fnl2z" Nov 23 23:01:58.780498 kubelet[2752]: E1123 23:01:58.779731 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7cff5b69f8-fnl2z_calico-system(f1e9d56c-d386-4be6-909e-83d2bc375abf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7cff5b69f8-fnl2z_calico-system(f1e9d56c-d386-4be6-909e-83d2bc375abf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0232ddb74774d9430b87dc4579357f03a6ac9380e069f0e3c96ad8946e8b9217\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7cff5b69f8-fnl2z" podUID="f1e9d56c-d386-4be6-909e-83d2bc375abf" Nov 23 23:01:58.781629 containerd[1555]: time="2025-11-23T23:01:58.781575090Z" level=error msg="Failed to destroy network for sandbox \"da9b2a39495c48cfc98de458299f72652d18bccee6ab4ee8d2d76c45eb4dfe09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.783883 containerd[1555]: time="2025-11-23T23:01:58.783832511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56947b74b7-c65fq,Uid:c25375d2-2332-49bd-a8e3-61dfcb956c34,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"da9b2a39495c48cfc98de458299f72652d18bccee6ab4ee8d2d76c45eb4dfe09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.784307 kubelet[2752]: E1123 23:01:58.784251 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da9b2a39495c48cfc98de458299f72652d18bccee6ab4ee8d2d76c45eb4dfe09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.784421 kubelet[2752]: E1123 23:01:58.784399 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da9b2a39495c48cfc98de458299f72652d18bccee6ab4ee8d2d76c45eb4dfe09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" Nov 23 23:01:58.784576 kubelet[2752]: E1123 23:01:58.784468 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da9b2a39495c48cfc98de458299f72652d18bccee6ab4ee8d2d76c45eb4dfe09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" Nov 23 23:01:58.785095 kubelet[2752]: E1123 23:01:58.784516 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56947b74b7-c65fq_calico-system(c25375d2-2332-49bd-a8e3-61dfcb956c34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56947b74b7-c65fq_calico-system(c25375d2-2332-49bd-a8e3-61dfcb956c34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da9b2a39495c48cfc98de458299f72652d18bccee6ab4ee8d2d76c45eb4dfe09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:01:58.785194 containerd[1555]: time="2025-11-23T23:01:58.784886481Z" level=error msg="Failed to destroy network for sandbox \"3ec5e5959bbbda7fecee8e8981bf27a5aa9aaf51c152613948c9bfaed593e889\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.786948 containerd[1555]: time="2025-11-23T23:01:58.786905299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7f78c7-d8v97,Uid:1e722cd7-3fb4-43d9-b64b-32096b2087bd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec5e5959bbbda7fecee8e8981bf27a5aa9aaf51c152613948c9bfaed593e889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.787849 kubelet[2752]: E1123 23:01:58.787285 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec5e5959bbbda7fecee8e8981bf27a5aa9aaf51c152613948c9bfaed593e889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.787935 kubelet[2752]: E1123 23:01:58.787876 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec5e5959bbbda7fecee8e8981bf27a5aa9aaf51c152613948c9bfaed593e889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" Nov 23 23:01:58.787935 kubelet[2752]: E1123 23:01:58.787896 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"3ec5e5959bbbda7fecee8e8981bf27a5aa9aaf51c152613948c9bfaed593e889\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" Nov 23 23:01:58.787981 kubelet[2752]: E1123 23:01:58.787944 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f7f78c7-d8v97_calico-apiserver(1e722cd7-3fb4-43d9-b64b-32096b2087bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f7f78c7-d8v97_calico-apiserver(1e722cd7-3fb4-43d9-b64b-32096b2087bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ec5e5959bbbda7fecee8e8981bf27a5aa9aaf51c152613948c9bfaed593e889\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:01:58.878058 systemd[1]: Created slice kubepods-besteffort-pod65c6ee75_f266_4d8e_9f91_7935bbe3f792.slice - libcontainer container kubepods-besteffort-pod65c6ee75_f266_4d8e_9f91_7935bbe3f792.slice. Nov 23 23:01:58.882488 containerd[1555]: time="2025-11-23T23:01:58.882444456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qcdmk,Uid:65c6ee75-f266-4d8e-9f91-7935bbe3f792,Namespace:calico-system,Attempt:0,}" Nov 23 23:01:58.945443 containerd[1555]: time="2025-11-23T23:01:58.945330273Z" level=error msg="Failed to destroy network for sandbox \"4d1056dbae47558b65a89dd4ae0a97621d9549f8314e9bd80558b27b3a970e20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.946947 containerd[1555]: time="2025-11-23T23:01:58.946832087Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qcdmk,Uid:65c6ee75-f266-4d8e-9f91-7935bbe3f792,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1056dbae47558b65a89dd4ae0a97621d9549f8314e9bd80558b27b3a970e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.947115 kubelet[2752]: E1123 23:01:58.947055 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1056dbae47558b65a89dd4ae0a97621d9549f8314e9bd80558b27b3a970e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:01:58.947166 kubelet[2752]: E1123 23:01:58.947135 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1056dbae47558b65a89dd4ae0a97621d9549f8314e9bd80558b27b3a970e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qcdmk" Nov 23 
23:01:58.947225 kubelet[2752]: E1123 23:01:58.947155 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1056dbae47558b65a89dd4ae0a97621d9549f8314e9bd80558b27b3a970e20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qcdmk" Nov 23 23:01:58.947254 kubelet[2752]: E1123 23:01:58.947211 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d1056dbae47558b65a89dd4ae0a97621d9549f8314e9bd80558b27b3a970e20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:01:59.038208 containerd[1555]: time="2025-11-23T23:01:59.038137824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 23:01:59.425634 kubelet[2752]: E1123 23:01:59.425578 2752 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Nov 23 23:01:59.426022 kubelet[2752]: E1123 23:01:59.425694 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-config podName:0f266d4c-4f00-43ea-b251-4bdc9532cfcf nodeName:}" failed. No retries permitted until 2025-11-23 23:01:59.925668557 +0000 UTC m=+36.184249912 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-config") pod "goldmane-666569f655-msv7x" (UID: "0f266d4c-4f00-43ea-b251-4bdc9532cfcf") : failed to sync configmap cache: timed out waiting for the condition Nov 23 23:01:59.443329 kubelet[2752]: E1123 23:01:59.442921 2752 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Nov 23 23:01:59.443329 kubelet[2752]: E1123 23:01:59.443019 2752 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-goldmane-key-pair podName:0f266d4c-4f00-43ea-b251-4bdc9532cfcf nodeName:}" failed. No retries permitted until 2025-11-23 23:01:59.942997546 +0000 UTC m=+36.201578901 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/0f266d4c-4f00-43ea-b251-4bdc9532cfcf-goldmane-key-pair") pod "goldmane-666569f655-msv7x" (UID: "0f266d4c-4f00-43ea-b251-4bdc9532cfcf") : failed to sync secret cache: timed out waiting for the condition Nov 23 23:02:00.078711 containerd[1555]: time="2025-11-23T23:02:00.078598054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msv7x,Uid:0f266d4c-4f00-43ea-b251-4bdc9532cfcf,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:00.138396 containerd[1555]: time="2025-11-23T23:02:00.138162494Z" level=error msg="Failed to destroy network for sandbox \"6233f041a70145a680671bf2ff4980f22625e78ad6e15473cf0bf4aaa50f420d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:02:00.142660 containerd[1555]: time="2025-11-23T23:02:00.142585570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msv7x,Uid:0f266d4c-4f00-43ea-b251-4bdc9532cfcf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6233f041a70145a680671bf2ff4980f22625e78ad6e15473cf0bf4aaa50f420d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:02:00.142810 systemd[1]: run-netns-cni\x2d884f4b0d\x2da647\x2dd70f\x2dc28b\x2dc4eb64d0115a.mount: Deactivated successfully. Nov 23 23:02:00.143524 kubelet[2752]: E1123 23:02:00.143456 2752 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6233f041a70145a680671bf2ff4980f22625e78ad6e15473cf0bf4aaa50f420d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 23:02:00.143646 kubelet[2752]: E1123 23:02:00.143531 2752 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6233f041a70145a680671bf2ff4980f22625e78ad6e15473cf0bf4aaa50f420d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-msv7x" Nov 23 23:02:00.143646 kubelet[2752]: E1123 23:02:00.143560 2752 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6233f041a70145a680671bf2ff4980f22625e78ad6e15473cf0bf4aaa50f420d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-msv7x" Nov 23 23:02:00.143778 kubelet[2752]: E1123 23:02:00.143667 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-msv7x_calico-system(0f266d4c-4f00-43ea-b251-4bdc9532cfcf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-msv7x_calico-system(0f266d4c-4f00-43ea-b251-4bdc9532cfcf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6233f041a70145a680671bf2ff4980f22625e78ad6e15473cf0bf4aaa50f420d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:02:05.525913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091520090.mount: Deactivated successfully. Nov 23 23:02:05.546884 containerd[1555]: time="2025-11-23T23:02:05.546822403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:05.548558 containerd[1555]: time="2025-11-23T23:02:05.548212051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 23:02:05.562783 containerd[1555]: time="2025-11-23T23:02:05.562693096Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:05.567388 containerd[1555]: time="2025-11-23T23:02:05.567315203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 23:02:05.568451 containerd[1555]: time="2025-11-23T23:02:05.568280769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.529839583s" Nov 23 23:02:05.568451 containerd[1555]: time="2025-11-23T23:02:05.568338529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 23:02:05.589199 containerd[1555]: time="2025-11-23T23:02:05.589148531Z" level=info msg="CreateContainer within sandbox \"badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 23:02:05.613324 containerd[1555]: time="2025-11-23T23:02:05.611616582Z" level=info msg="Container f9a2426f66580165e11823e0bfe3850f90bfe3fcb7915d73f630504c7cecb0c8: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:05.627047 containerd[1555]: time="2025-11-23T23:02:05.626981871Z" level=info msg="CreateContainer within sandbox \"badcb4a7ef0ed36f969fa175baac7f933272c89e963a2d13281ca8dda4cbdc55\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f9a2426f66580165e11823e0bfe3850f90bfe3fcb7915d73f630504c7cecb0c8\"" Nov 23 23:02:05.629338 containerd[1555]: time="2025-11-23T23:02:05.628421800Z" level=info msg="StartContainer for \"f9a2426f66580165e11823e0bfe3850f90bfe3fcb7915d73f630504c7cecb0c8\"" Nov 23 23:02:05.649320 containerd[1555]: time="2025-11-23T23:02:05.649252442Z" level=info msg="connecting to shim f9a2426f66580165e11823e0bfe3850f90bfe3fcb7915d73f630504c7cecb0c8" address="unix:///run/containerd/s/d0fb1ba48ee97c2d941d4fac2efa4c3cafbaadb5700fe5d70db68afa66ad46fb" protocol=ttrpc version=3 Nov 23 23:02:05.709664 systemd[1]: Started cri-containerd-f9a2426f66580165e11823e0bfe3850f90bfe3fcb7915d73f630504c7cecb0c8.scope - libcontainer 
container f9a2426f66580165e11823e0bfe3850f90bfe3fcb7915d73f630504c7cecb0c8. Nov 23 23:02:05.823704 containerd[1555]: time="2025-11-23T23:02:05.819110074Z" level=info msg="StartContainer for \"f9a2426f66580165e11823e0bfe3850f90bfe3fcb7915d73f630504c7cecb0c8\" returns successfully" Nov 23 23:02:06.003497 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 23:02:06.003656 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 23 23:02:06.100787 kubelet[2752]: I1123 23:02:06.099560 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j4dls" podStartSLOduration=1.614552247 podStartE2EDuration="18.099540835s" podCreationTimestamp="2025-11-23 23:01:48 +0000 UTC" firstStartedPulling="2025-11-23 23:01:49.084526308 +0000 UTC m=+25.343107663" lastFinishedPulling="2025-11-23 23:02:05.569514896 +0000 UTC m=+41.828096251" observedRunningTime="2025-11-23 23:02:06.09852235 +0000 UTC m=+42.357103705" watchObservedRunningTime="2025-11-23 23:02:06.099540835 +0000 UTC m=+42.358122190" Nov 23 23:02:06.288397 kubelet[2752]: I1123 23:02:06.288348 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-backend-key-pair\") pod \"f1e9d56c-d386-4be6-909e-83d2bc375abf\" (UID: \"f1e9d56c-d386-4be6-909e-83d2bc375abf\") " Nov 23 23:02:06.288397 kubelet[2752]: I1123 23:02:06.288403 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztmvb\" (UniqueName: \"kubernetes.io/projected/f1e9d56c-d386-4be6-909e-83d2bc375abf-kube-api-access-ztmvb\") pod \"f1e9d56c-d386-4be6-909e-83d2bc375abf\" (UID: \"f1e9d56c-d386-4be6-909e-83d2bc375abf\") " Nov 23 23:02:06.289611 kubelet[2752]: I1123 23:02:06.288434 2752 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-ca-bundle\") pod \"f1e9d56c-d386-4be6-909e-83d2bc375abf\" (UID: \"f1e9d56c-d386-4be6-909e-83d2bc375abf\") " Nov 23 23:02:06.289611 kubelet[2752]: I1123 23:02:06.289023 2752 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f1e9d56c-d386-4be6-909e-83d2bc375abf" (UID: "f1e9d56c-d386-4be6-909e-83d2bc375abf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 23:02:06.293550 kubelet[2752]: I1123 23:02:06.293493 2752 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e9d56c-d386-4be6-909e-83d2bc375abf-kube-api-access-ztmvb" (OuterVolumeSpecName: "kube-api-access-ztmvb") pod "f1e9d56c-d386-4be6-909e-83d2bc375abf" (UID: "f1e9d56c-d386-4be6-909e-83d2bc375abf"). InnerVolumeSpecName "kube-api-access-ztmvb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 23:02:06.293550 kubelet[2752]: I1123 23:02:06.293506 2752 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f1e9d56c-d386-4be6-909e-83d2bc375abf" (UID: "f1e9d56c-d386-4be6-909e-83d2bc375abf"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 23:02:06.389761 kubelet[2752]: I1123 23:02:06.389558 2752 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-backend-key-pair\") on node \"ci-4459-2-1-d-6a40a07c08\" DevicePath \"\"" Nov 23 23:02:06.389761 kubelet[2752]: I1123 23:02:06.389614 2752 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztmvb\" (UniqueName: \"kubernetes.io/projected/f1e9d56c-d386-4be6-909e-83d2bc375abf-kube-api-access-ztmvb\") on node \"ci-4459-2-1-d-6a40a07c08\" DevicePath \"\"" Nov 23 23:02:06.389761 kubelet[2752]: I1123 23:02:06.389641 2752 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1e9d56c-d386-4be6-909e-83d2bc375abf-whisker-ca-bundle\") on node \"ci-4459-2-1-d-6a40a07c08\" DevicePath \"\"" Nov 23 23:02:06.515893 kubelet[2752]: I1123 23:02:06.515593 2752 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 23:02:06.524207 systemd[1]: var-lib-kubelet-pods-f1e9d56c\x2dd386\x2d4be6\x2d909e\x2d83d2bc375abf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztmvb.mount: Deactivated successfully. Nov 23 23:02:06.525541 systemd[1]: var-lib-kubelet-pods-f1e9d56c\x2dd386\x2d4be6\x2d909e\x2d83d2bc375abf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 23:02:07.079403 systemd[1]: Removed slice kubepods-besteffort-podf1e9d56c_d386_4be6_909e_83d2bc375abf.slice - libcontainer container kubepods-besteffort-podf1e9d56c_d386_4be6_909e_83d2bc375abf.slice. Nov 23 23:02:07.167508 systemd[1]: Created slice kubepods-besteffort-podc3a04e07_75ec_47a7_ac40_5bddb6afbad1.slice - libcontainer container kubepods-besteffort-podc3a04e07_75ec_47a7_ac40_5bddb6afbad1.slice. 
Nov 23 23:02:07.196609 kubelet[2752]: I1123 23:02:07.196316 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3a04e07-75ec-47a7-ac40-5bddb6afbad1-whisker-ca-bundle\") pod \"whisker-5b9685c896-hmv6x\" (UID: \"c3a04e07-75ec-47a7-ac40-5bddb6afbad1\") " pod="calico-system/whisker-5b9685c896-hmv6x" Nov 23 23:02:07.197340 kubelet[2752]: I1123 23:02:07.196902 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3a04e07-75ec-47a7-ac40-5bddb6afbad1-whisker-backend-key-pair\") pod \"whisker-5b9685c896-hmv6x\" (UID: \"c3a04e07-75ec-47a7-ac40-5bddb6afbad1\") " pod="calico-system/whisker-5b9685c896-hmv6x" Nov 23 23:02:07.197340 kubelet[2752]: I1123 23:02:07.196936 2752 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqwf2\" (UniqueName: \"kubernetes.io/projected/c3a04e07-75ec-47a7-ac40-5bddb6afbad1-kube-api-access-vqwf2\") pod \"whisker-5b9685c896-hmv6x\" (UID: \"c3a04e07-75ec-47a7-ac40-5bddb6afbad1\") " pod="calico-system/whisker-5b9685c896-hmv6x" Nov 23 23:02:07.474284 containerd[1555]: time="2025-11-23T23:02:07.473724160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b9685c896-hmv6x,Uid:c3a04e07-75ec-47a7-ac40-5bddb6afbad1,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:07.752747 systemd-networkd[1408]: cali73c5abcba43: Link UP Nov 23 23:02:07.754446 systemd-networkd[1408]: cali73c5abcba43: Gained carrier Nov 23 23:02:07.785684 containerd[1555]: 2025-11-23 23:02:07.519 [INFO][3876] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 23:02:07.785684 containerd[1555]: 2025-11-23 23:02:07.590 [INFO][3876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0 whisker-5b9685c896- calico-system c3a04e07-75ec-47a7-ac40-5bddb6afbad1 897 0 2025-11-23 23:02:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b9685c896 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 whisker-5b9685c896-hmv6x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali73c5abcba43 [] [] }} ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-" Nov 23 23:02:07.785684 containerd[1555]: 2025-11-23 23:02:07.591 [INFO][3876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" Nov 23 23:02:07.785684 containerd[1555]: 2025-11-23 23:02:07.665 [INFO][3934] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" HandleID="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Workload="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.665 [INFO][3934] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" HandleID="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Workload="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b78c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"whisker-5b9685c896-hmv6x", "timestamp":"2025-11-23 23:02:07.665517905 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.665 [INFO][3934] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.665 [INFO][3934] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.666 [INFO][3934] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.680 [INFO][3934] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.690 [INFO][3934] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.703 [INFO][3934] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.708 [INFO][3934] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.785980 containerd[1555]: 2025-11-23 23:02:07.711 [INFO][3934] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.787572 containerd[1555]: 2025-11-23 23:02:07.711 [INFO][3934] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.787572 containerd[1555]: 2025-11-23 23:02:07.714 [INFO][3934] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26 Nov 23 23:02:07.787572 containerd[1555]: 2025-11-23 23:02:07.726 [INFO][3934] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.787572 containerd[1555]: 2025-11-23 23:02:07.734 [INFO][3934] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.129/26] block=192.168.124.128/26 handle="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.787572 containerd[1555]: 2025-11-23 23:02:07.734 [INFO][3934] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.129/26] handle="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:07.787572 
containerd[1555]: 2025-11-23 23:02:07.734 [INFO][3934] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:07.787572 containerd[1555]: 2025-11-23 23:02:07.734 [INFO][3934] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.129/26] IPv6=[] ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" HandleID="k8s-pod-network.6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Workload="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" Nov 23 23:02:07.787775 containerd[1555]: 2025-11-23 23:02:07.740 [INFO][3876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0", GenerateName:"whisker-5b9685c896-", Namespace:"calico-system", SelfLink:"", UID:"c3a04e07-75ec-47a7-ac40-5bddb6afbad1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b9685c896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"whisker-5b9685c896-hmv6x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73c5abcba43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:07.787775 containerd[1555]: 2025-11-23 23:02:07.740 [INFO][3876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.129/32] ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" Nov 23 23:02:07.788179 containerd[1555]: 2025-11-23 23:02:07.740 [INFO][3876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73c5abcba43 ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" Nov 23 23:02:07.788179 containerd[1555]: 2025-11-23 23:02:07.756 [INFO][3876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" Nov 23 23:02:07.788273 containerd[1555]: 2025-11-23 23:02:07.757 [INFO][3876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0", GenerateName:"whisker-5b9685c896-", Namespace:"calico-system", SelfLink:"", UID:"c3a04e07-75ec-47a7-ac40-5bddb6afbad1", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b9685c896", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26", Pod:"whisker-5b9685c896-hmv6x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.124.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali73c5abcba43", MAC:"1a:39:56:0e:ac:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:07.788445 containerd[1555]: 2025-11-23 23:02:07.778 [INFO][3876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" Namespace="calico-system" Pod="whisker-5b9685c896-hmv6x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-whisker--5b9685c896--hmv6x-eth0" Nov 23 23:02:07.852559 containerd[1555]: time="2025-11-23T23:02:07.852502544Z" level=info msg="connecting to shim 6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26" address="unix:///run/containerd/s/ca31f0e3c6d020bd8421b629b1b6b6b6f732faea0c50004cba3e4aa326c1a6d4" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:07.882934 kubelet[2752]: I1123 23:02:07.882882 2752 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e9d56c-d386-4be6-909e-83d2bc375abf" path="/var/lib/kubelet/pods/f1e9d56c-d386-4be6-909e-83d2bc375abf/volumes" Nov 23 23:02:07.948693 systemd[1]: Started cri-containerd-6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26.scope - libcontainer container 6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26. 
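Editor's note: the IPAM entries above show the usual Calico flow for the whisker pod — take the host-wide lock, confirm this node's affinity to the 192.168.124.128/26 block, assign one free address (192.168.124.129), write the block back, release the lock. A minimal Go model of the "assign 1 address from block" step, illustrative only and not Calico's ipam.go:

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a host-affine block and returns the first address
// that is not already recorded as used.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.124.128/26")
	// Marking .128 as used is only so this toy reproduces the log,
	// where the first workload receives .129.
	used := map[netip.Addr]bool{netip.MustParseAddr("192.168.124.128"): true}
	if ip, ok := nextFree(block, used); ok {
		fmt.Println("assigned", ip) // assigned 192.168.124.129
	}
}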
Nov 23 23:02:08.031239 containerd[1555]: time="2025-11-23T23:02:08.030476649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b9685c896-hmv6x,Uid:c3a04e07-75ec-47a7-ac40-5bddb6afbad1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6f52a1954e3906487aa4ccb3529cff0b3e582592e3c327fc29b3b54ebb617c26\"" Nov 23 23:02:08.034467 containerd[1555]: time="2025-11-23T23:02:08.034424748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:02:08.392327 containerd[1555]: time="2025-11-23T23:02:08.392081589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:08.393968 containerd[1555]: time="2025-11-23T23:02:08.393837837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:02:08.393968 containerd[1555]: time="2025-11-23T23:02:08.393938078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:02:08.394399 kubelet[2752]: E1123 23:02:08.394276 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:08.394399 kubelet[2752]: E1123 23:02:08.394362 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:08.419545 kubelet[2752]: E1123 23:02:08.419336 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a76a4d8a680f4210a2a45574ec2b4759,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:08.424829 containerd[1555]: time="2025-11-23T23:02:08.424522465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:02:08.485272 systemd-networkd[1408]: vxlan.calico: Link UP Nov 23 23:02:08.485545 systemd-networkd[1408]: vxlan.calico: Gained carrier Nov 23 23:02:08.757098 containerd[1555]: time="2025-11-23T23:02:08.756902384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:08.760314 containerd[1555]: time="2025-11-23T23:02:08.759468557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:02:08.760314 containerd[1555]: time="2025-11-23T23:02:08.759502517Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:08.760720 kubelet[2752]: E1123 23:02:08.760679 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:08.760831 kubelet[2752]: E1123 23:02:08.760814 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:08.761031 kubelet[2752]: E1123 23:02:08.760986 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:08.762876 kubelet[2752]: E1123 23:02:08.762676 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:02:09.085693 kubelet[2752]: E1123 23:02:09.085634 2752 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:02:09.430364 systemd-networkd[1408]: cali73c5abcba43: Gained IPv6LL Nov 23 23:02:09.750554 systemd-networkd[1408]: vxlan.calico: Gained IPv6LL Nov 23 23:02:10.871503 containerd[1555]: time="2025-11-23T23:02:10.871458992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pllfh,Uid:43989c61-ebed-4d18-99cf-851dcb1b5eb3,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:10.872495 containerd[1555]: time="2025-11-23T23:02:10.872434276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qcdmk,Uid:65c6ee75-f266-4d8e-9f91-7935bbe3f792,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:10.873135 containerd[1555]: time="2025-11-23T23:02:10.873041759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-6nxhq,Uid:3b014a55-de73-4ac9-9e35-2cc72ed4bcca,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:02:11.171306 systemd-networkd[1408]: cali06a16c1e1bf: Link UP Nov 23 23:02:11.171539 systemd-networkd[1408]: cali06a16c1e1bf: Gained carrier Nov 23 23:02:11.204666 containerd[1555]: 2025-11-23 23:02:10.979 [INFO][4188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0 calico-apiserver-5b9f97f6d6- calico-apiserver 3b014a55-de73-4ac9-9e35-2cc72ed4bcca 819 0 2025-11-23 23:01:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b9f97f6d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 calico-apiserver-5b9f97f6d6-6nxhq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali06a16c1e1bf [] [] }} ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-" Nov 23 23:02:11.204666 containerd[1555]: 2025-11-23 23:02:10.979 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" Nov 23 23:02:11.204666 containerd[1555]: 2025-11-23 23:02:11.058 [INFO][4212] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" HandleID="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.059 [INFO][4212] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" HandleID="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024ba40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"calico-apiserver-5b9f97f6d6-6nxhq", "timestamp":"2025-11-23 23:02:11.058579928 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.059 [INFO][4212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.060 [INFO][4212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.060 [INFO][4212] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.083 [INFO][4212] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.095 [INFO][4212] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.116 [INFO][4212] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.120 [INFO][4212] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.204890 containerd[1555]: 2025-11-23 23:02:11.127 [INFO][4212] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.205081 containerd[1555]: 2025-11-23 23:02:11.127 [INFO][4212] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.205081 containerd[1555]: 2025-11-23 23:02:11.131 [INFO][4212] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1 Nov 23 23:02:11.205081 containerd[1555]: 2025-11-23 23:02:11.146 [INFO][4212] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.205081 containerd[1555]: 2025-11-23 23:02:11.158 [INFO][4212] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.130/26] block=192.168.124.128/26 
handle="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.205081 containerd[1555]: 2025-11-23 23:02:11.159 [INFO][4212] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.130/26] handle="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.205081 containerd[1555]: 2025-11-23 23:02:11.159 [INFO][4212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 23:02:11.205081 containerd[1555]: 2025-11-23 23:02:11.159 [INFO][4212] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.130/26] IPv6=[] ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" HandleID="k8s-pod-network.5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" Nov 23 23:02:11.205211 containerd[1555]: 2025-11-23 23:02:11.162 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0", GenerateName:"calico-apiserver-5b9f97f6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b014a55-de73-4ac9-9e35-2cc72ed4bcca", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f97f6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"calico-apiserver-5b9f97f6d6-6nxhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06a16c1e1bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.205259 containerd[1555]: 2025-11-23 23:02:11.162 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.130/32] ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" Nov 23 23:02:11.205259 containerd[1555]: 2025-11-23 23:02:11.162 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali06a16c1e1bf ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" 
WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" Nov 23 23:02:11.205259 containerd[1555]: 2025-11-23 23:02:11.164 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" Nov 23 23:02:11.205355 containerd[1555]: 2025-11-23 23:02:11.176 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0", GenerateName:"calico-apiserver-5b9f97f6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b014a55-de73-4ac9-9e35-2cc72ed4bcca", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f97f6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1", Pod:"calico-apiserver-5b9f97f6d6-6nxhq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali06a16c1e1bf", MAC:"2e:52:51:51:41:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.205407 containerd[1555]: 2025-11-23 23:02:11.199 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-6nxhq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--6nxhq-eth0" Nov 23 23:02:11.261106 containerd[1555]: time="2025-11-23T23:02:11.260937731Z" level=info msg="connecting to shim 5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1" address="unix:///run/containerd/s/bdf357e558813552e6b46231a8ff8daf7d9030cea478e4711ffb3b16e650d685" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:11.283785 systemd-networkd[1408]: califb1cc11bd6d: Link UP Nov 23 23:02:11.287068 systemd-networkd[1408]: califb1cc11bd6d: Gained carrier Nov 23 23:02:11.335783 systemd[1]: Started cri-containerd-5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1.scope - libcontainer container 
5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1. Nov 23 23:02:11.347430 containerd[1555]: 2025-11-23 23:02:10.984 [INFO][4172] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0 coredns-668d6bf9bc- kube-system 43989c61-ebed-4d18-99cf-851dcb1b5eb3 822 0 2025-11-23 23:01:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 coredns-668d6bf9bc-pllfh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califb1cc11bd6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-" Nov 23 23:02:11.347430 containerd[1555]: 2025-11-23 23:02:10.984 [INFO][4172] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" Nov 23 23:02:11.347430 containerd[1555]: 2025-11-23 23:02:11.059 [INFO][4217] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" HandleID="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Workload="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.063 [INFO][4217] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" HandleID="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Workload="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bb5a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"coredns-668d6bf9bc-pllfh", "timestamp":"2025-11-23 23:02:11.059264771 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.063 [INFO][4217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.159 [INFO][4217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.159 [INFO][4217] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.196 [INFO][4217] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.209 [INFO][4217] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.220 [INFO][4217] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.224 [INFO][4217] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.347763 containerd[1555]: 2025-11-23 23:02:11.232 [INFO][4217] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.348031 containerd[1555]: 2025-11-23 23:02:11.232 [INFO][4217] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.348031 containerd[1555]: 2025-11-23 23:02:11.242 [INFO][4217] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174 Nov 23 23:02:11.348031 containerd[1555]: 2025-11-23 23:02:11.254 [INFO][4217] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.348031 containerd[1555]: 2025-11-23 23:02:11.265 [INFO][4217] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.131/26] block=192.168.124.128/26 handle="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.348031 containerd[1555]: 2025-11-23 23:02:11.267 [INFO][4217] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.131/26] handle="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.348031 containerd[1555]: 2025-11-23 23:02:11.267 [INFO][4217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:02:11.348031 containerd[1555]: 2025-11-23 23:02:11.267 [INFO][4217] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.131/26] IPv6=[] ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" HandleID="k8s-pod-network.e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Workload="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" Nov 23 23:02:11.348182 containerd[1555]: 2025-11-23 23:02:11.275 [INFO][4172] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"43989c61-ebed-4d18-99cf-851dcb1b5eb3", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"coredns-668d6bf9bc-pllfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb1cc11bd6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.348182 containerd[1555]: 2025-11-23 23:02:11.276 [INFO][4172] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.131/32] ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" Nov 23 23:02:11.348182 containerd[1555]: 2025-11-23 23:02:11.276 [INFO][4172] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb1cc11bd6d ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" Nov 23 23:02:11.348182 containerd[1555]: 2025-11-23 23:02:11.290 [INFO][4172] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" Nov 23 23:02:11.348182 containerd[1555]: 2025-11-23 23:02:11.295 [INFO][4172] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"43989c61-ebed-4d18-99cf-851dcb1b5eb3", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174", Pod:"coredns-668d6bf9bc-pllfh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califb1cc11bd6d", MAC:"2a:68:fe:e1:73:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.348182 containerd[1555]: 2025-11-23 23:02:11.337 [INFO][4172] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" Namespace="kube-system" Pod="coredns-668d6bf9bc-pllfh" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--pllfh-eth0" Nov 23 23:02:11.397044 containerd[1555]: time="2025-11-23T23:02:11.396990870Z" level=info msg="connecting to shim e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174" address="unix:///run/containerd/s/f125b62f18d8ab199b70f658263fdf026b6e2bba729330251d2b5d8eba11dda8" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:11.409838 systemd-networkd[1408]: calid297d11aa29: Link UP Nov 23 23:02:11.411633 systemd-networkd[1408]: calid297d11aa29: Gained carrier Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:10.979 [INFO][4182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0 csi-node-driver- calico-system 65c6ee75-f266-4d8e-9f91-7935bbe3f792 721 0 2025-11-23 23:01:48 
+0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 csi-node-driver-qcdmk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid297d11aa29 [] [] }} ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:10.979 [INFO][4182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.061 [INFO][4210] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" HandleID="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Workload="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.063 [INFO][4210] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" HandleID="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Workload="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3f40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"csi-node-driver-qcdmk", "timestamp":"2025-11-23 23:02:11.061835421 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.063 [INFO][4210] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.267 [INFO][4210] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.268 [INFO][4210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.292 [INFO][4210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.320 [INFO][4210] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.349 [INFO][4210] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.358 [INFO][4210] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.362 [INFO][4210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.363 [INFO][4210] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.366 [INFO][4210] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.376 [INFO][4210] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.391 [INFO][4210] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.132/26] block=192.168.124.128/26 handle="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.391 [INFO][4210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.132/26] handle="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.391 [INFO][4210] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
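Editor's note: plugin [4210] announces "About to acquire host-wide IPAM lock" at 23:02:11.063 but only acquires it at 23:02:11.267, after [4212] and [4217] have released it — concurrent CNI ADDs are serialized behind one per-host lock, which is why the three new pods receive consecutive addresses (.130 through .132). A toy Go illustration of that serialization, not Calico code:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex // stands in for the host-wide IPAM lock
		next = 130      // .129 already went to the whisker pod
		wg   sync.WaitGroup
	)
	pods := []string{"calico-apiserver-5b9f97f6d6-6nxhq", "coredns-668d6bf9bc-pllfh", "csi-node-driver-qcdmk"}
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			mu.Lock() // "About to acquire host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.124.%d/26", next)
			next++
			mu.Unlock() // "Released host-wide IPAM lock."
			// Which pod ends up with which address depends on lock order, as in the log.
			fmt.Println(pod, "->", ip)
		}(pod)
	}
	wg.Wait()
}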
Nov 23 23:02:11.450549 containerd[1555]: 2025-11-23 23:02:11.391 [INFO][4210] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.132/26] IPv6=[] ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" HandleID="k8s-pod-network.8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Workload="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" Nov 23 23:02:11.451115 containerd[1555]: 2025-11-23 23:02:11.399 [INFO][4182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"65c6ee75-f266-4d8e-9f91-7935bbe3f792", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"csi-node-driver-qcdmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid297d11aa29", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.451115 containerd[1555]: 2025-11-23 23:02:11.400 [INFO][4182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.132/32] ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" Nov 23 23:02:11.451115 containerd[1555]: 2025-11-23 23:02:11.400 [INFO][4182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid297d11aa29 ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" Nov 23 23:02:11.451115 containerd[1555]: 2025-11-23 23:02:11.420 [INFO][4182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" Nov 23 23:02:11.451115 containerd[1555]: 2025-11-23 23:02:11.421 [INFO][4182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"65c6ee75-f266-4d8e-9f91-7935bbe3f792", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d", Pod:"csi-node-driver-qcdmk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid297d11aa29", MAC:"02:d3:ff:60:24:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:11.451115 containerd[1555]: 2025-11-23 23:02:11.437 [INFO][4182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" Namespace="calico-system" Pod="csi-node-driver-qcdmk" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-csi--node--driver--qcdmk-eth0" Nov 23 23:02:11.464656 systemd[1]: Started cri-containerd-e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174.scope - libcontainer container e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174. 
Nov 23 23:02:11.515750 containerd[1555]: time="2025-11-23T23:02:11.515263899Z" level=info msg="connecting to shim 8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d" address="unix:///run/containerd/s/429a913dbc88d5ec00c3b85d8eb7c48c772db5035d88e9962eb9e78f32342910" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:11.518316 containerd[1555]: time="2025-11-23T23:02:11.518211351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-6nxhq,Uid:3b014a55-de73-4ac9-9e35-2cc72ed4bcca,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5933c0f404373b5b097b192c97f8e4a5b12f551b189bfb66bad211a8c684f3a1\"" Nov 23 23:02:11.526242 containerd[1555]: time="2025-11-23T23:02:11.525962462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:11.553887 containerd[1555]: time="2025-11-23T23:02:11.553843932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pllfh,Uid:43989c61-ebed-4d18-99cf-851dcb1b5eb3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174\"" Nov 23 23:02:11.561241 containerd[1555]: time="2025-11-23T23:02:11.561174841Z" level=info msg="CreateContainer within sandbox \"e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:02:11.563835 systemd[1]: Started cri-containerd-8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d.scope - libcontainer container 8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d. Nov 23 23:02:11.578145 containerd[1555]: time="2025-11-23T23:02:11.578091068Z" level=info msg="Container ef956332b0b5445823980480853a5f7d4f06eaca89b3c47f0e447a0b55908d4e: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:11.586864 containerd[1555]: time="2025-11-23T23:02:11.585915219Z" level=info msg="CreateContainer within sandbox \"e0ca095fc8aeae65c16fa69176e80462b76b77fb42a3580f69c128b49e112174\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef956332b0b5445823980480853a5f7d4f06eaca89b3c47f0e447a0b55908d4e\"" Nov 23 23:02:11.588028 containerd[1555]: time="2025-11-23T23:02:11.587650986Z" level=info msg="StartContainer for \"ef956332b0b5445823980480853a5f7d4f06eaca89b3c47f0e447a0b55908d4e\"" Nov 23 23:02:11.590083 containerd[1555]: time="2025-11-23T23:02:11.590046156Z" level=info msg="connecting to shim ef956332b0b5445823980480853a5f7d4f06eaca89b3c47f0e447a0b55908d4e" address="unix:///run/containerd/s/f125b62f18d8ab199b70f658263fdf026b6e2bba729330251d2b5d8eba11dda8" protocol=ttrpc version=3 Nov 23 23:02:11.621502 systemd[1]: Started cri-containerd-ef956332b0b5445823980480853a5f7d4f06eaca89b3c47f0e447a0b55908d4e.scope - libcontainer container ef956332b0b5445823980480853a5f7d4f06eaca89b3c47f0e447a0b55908d4e. 
Nov 23 23:02:11.647893 containerd[1555]: time="2025-11-23T23:02:11.647597544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qcdmk,Uid:65c6ee75-f266-4d8e-9f91-7935bbe3f792,Namespace:calico-system,Attempt:0,} returns sandbox id \"8180096f4a5cce46b24d28c10124958ac190cf8bc9bef84eb3dad4e89e02873d\"" Nov 23 23:02:11.698607 containerd[1555]: time="2025-11-23T23:02:11.698555666Z" level=info msg="StartContainer for \"ef956332b0b5445823980480853a5f7d4f06eaca89b3c47f0e447a0b55908d4e\" returns successfully" Nov 23 23:02:11.854982 containerd[1555]: time="2025-11-23T23:02:11.854919566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:11.857502 containerd[1555]: time="2025-11-23T23:02:11.857430536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:11.857987 containerd[1555]: time="2025-11-23T23:02:11.857733137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:11.858494 kubelet[2752]: E1123 23:02:11.858424 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:11.859778 kubelet[2752]: E1123 23:02:11.858652 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:11.859778 kubelet[2752]: E1123 23:02:11.859696 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rsnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-6nxhq_calico-apiserver(3b014a55-de73-4ac9-9e35-2cc72ed4bcca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:11.861161 kubelet[2752]: E1123 23:02:11.860889 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:02:11.861782 containerd[1555]: time="2025-11-23T23:02:11.861593352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:02:11.873719 containerd[1555]: time="2025-11-23T23:02:11.873257079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xms8x,Uid:03f9e6e9-b3a5-4fb3-a283-2563920974fa,Namespace:kube-system,Attempt:0,}" Nov 23 23:02:11.875084 containerd[1555]: time="2025-11-23T23:02:11.874360803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56947b74b7-c65fq,Uid:c25375d2-2332-49bd-a8e3-61dfcb956c34,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:12.074894 systemd-networkd[1408]: cali7bcb08c2679: Link UP Nov 23 23:02:12.075966 systemd-networkd[1408]: cali7bcb08c2679: Gained carrier Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:11.957 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0 coredns-668d6bf9bc- kube-system 03f9e6e9-b3a5-4fb3-a283-2563920974fa 808 0 2025-11-23 23:01:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 coredns-668d6bf9bc-xms8x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7bcb08c2679 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-" Nov 23 23:02:12.105306 
containerd[1555]: 2025-11-23 23:02:11.957 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.007 [INFO][4455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" HandleID="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Workload="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.007 [INFO][4455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" HandleID="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Workload="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"coredns-668d6bf9bc-xms8x", "timestamp":"2025-11-23 23:02:12.007413729 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.007 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.007 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.007 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.022 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.030 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.037 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.040 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.043 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.043 [INFO][4455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.045 [INFO][4455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.052 [INFO][4455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.062 [INFO][4455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.133/26] block=192.168.124.128/26 handle="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.062 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.133/26] handle="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.062 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:02:12.105306 containerd[1555]: 2025-11-23 23:02:12.062 [INFO][4455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.133/26] IPv6=[] ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" HandleID="k8s-pod-network.a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Workload="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" Nov 23 23:02:12.108511 containerd[1555]: 2025-11-23 23:02:12.067 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03f9e6e9-b3a5-4fb3-a283-2563920974fa", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"coredns-668d6bf9bc-xms8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bcb08c2679", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.108511 containerd[1555]: 2025-11-23 23:02:12.068 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.133/32] ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" Nov 23 23:02:12.108511 containerd[1555]: 2025-11-23 23:02:12.068 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bcb08c2679 ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" Nov 23 23:02:12.108511 containerd[1555]: 2025-11-23 23:02:12.075 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" Nov 23 23:02:12.108511 containerd[1555]: 2025-11-23 23:02:12.077 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"03f9e6e9-b3a5-4fb3-a283-2563920974fa", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd", Pod:"coredns-668d6bf9bc-xms8x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.124.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bcb08c2679", MAC:"02:0a:58:c0:af:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.108511 containerd[1555]: 2025-11-23 23:02:12.097 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" Namespace="kube-system" Pod="coredns-668d6bf9bc-xms8x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-coredns--668d6bf9bc--xms8x-eth0" Nov 23 23:02:12.116565 kubelet[2752]: E1123 23:02:12.114544 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:02:12.130284 kubelet[2752]: I1123 23:02:12.129879 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pllfh" 
podStartSLOduration=43.129859544 podStartE2EDuration="43.129859544s" podCreationTimestamp="2025-11-23 23:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:12.129791144 +0000 UTC m=+48.388372499" watchObservedRunningTime="2025-11-23 23:02:12.129859544 +0000 UTC m=+48.388440859" Nov 23 23:02:12.170525 containerd[1555]: time="2025-11-23T23:02:12.170457935Z" level=info msg="connecting to shim a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd" address="unix:///run/containerd/s/7055a2c81e0965b8235fa6f89daa77ddd17aeeae08b023bc26cbd33b06a09641" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:12.213627 containerd[1555]: time="2025-11-23T23:02:12.213443815Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:12.221379 containerd[1555]: time="2025-11-23T23:02:12.221121123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:02:12.222179 containerd[1555]: time="2025-11-23T23:02:12.221168164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:02:12.222748 kubelet[2752]: E1123 23:02:12.222334 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:12.222748 kubelet[2752]: E1123 23:02:12.222384 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:12.222748 kubelet[2752]: E1123 23:02:12.222560 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:12.226598 containerd[1555]: time="2025-11-23T23:02:12.226423743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:02:12.246763 systemd[1]: Started cri-containerd-a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd.scope - libcontainer container a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd. 
Nov 23 23:02:12.262467 systemd-networkd[1408]: cali7c668aa53a0: Link UP Nov 23 23:02:12.262717 systemd-networkd[1408]: cali7c668aa53a0: Gained carrier Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:11.980 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0 calico-kube-controllers-56947b74b7- calico-system c25375d2-2332-49bd-a8e3-61dfcb956c34 811 0 2025-11-23 23:01:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56947b74b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 calico-kube-controllers-56947b74b7-c65fq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c668aa53a0 [] [] }} ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:11.981 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.029 [INFO][4461] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" HandleID="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.029 [INFO][4461] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" HandleID="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"calico-kube-controllers-56947b74b7-c65fq", "timestamp":"2025-11-23 23:02:12.02910981 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.029 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.062 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.063 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.124 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.139 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.163 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.177 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.199 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.200 [INFO][4461] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.205 [INFO][4461] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4 Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.232 [INFO][4461] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.253 [INFO][4461] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.134/26] block=192.168.124.128/26 handle="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.253 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.134/26] handle="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.253 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:02:12.293865 containerd[1555]: 2025-11-23 23:02:12.253 [INFO][4461] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.134/26] IPv6=[] ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" HandleID="k8s-pod-network.ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" Nov 23 23:02:12.295023 containerd[1555]: 2025-11-23 23:02:12.256 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0", GenerateName:"calico-kube-controllers-56947b74b7-", Namespace:"calico-system", SelfLink:"", UID:"c25375d2-2332-49bd-a8e3-61dfcb956c34", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56947b74b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"calico-kube-controllers-56947b74b7-c65fq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c668aa53a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.295023 containerd[1555]: 2025-11-23 23:02:12.258 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.134/32] ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" Nov 23 23:02:12.295023 containerd[1555]: 2025-11-23 23:02:12.258 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c668aa53a0 ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" Nov 23 23:02:12.295023 containerd[1555]: 2025-11-23 23:02:12.261 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" 
WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" Nov 23 23:02:12.295023 containerd[1555]: 2025-11-23 23:02:12.262 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0", GenerateName:"calico-kube-controllers-56947b74b7-", Namespace:"calico-system", SelfLink:"", UID:"c25375d2-2332-49bd-a8e3-61dfcb956c34", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56947b74b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4", Pod:"calico-kube-controllers-56947b74b7-c65fq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.124.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c668aa53a0", MAC:"fa:81:ee:87:4e:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:12.295023 containerd[1555]: 2025-11-23 23:02:12.287 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" Namespace="calico-system" Pod="calico-kube-controllers-56947b74b7-c65fq" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--kube--controllers--56947b74b7--c65fq-eth0" Nov 23 23:02:12.337906 containerd[1555]: time="2025-11-23T23:02:12.337848757Z" level=info msg="connecting to shim ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4" address="unix:///run/containerd/s/c8c0c74d0494625ff7d70456dd950ec751b0577bf8b6e7b4d66b936f85d2ab89" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:12.339318 containerd[1555]: time="2025-11-23T23:02:12.339204162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xms8x,Uid:03f9e6e9-b3a5-4fb3-a283-2563920974fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd\"" Nov 23 23:02:12.345195 containerd[1555]: time="2025-11-23T23:02:12.345030864Z" level=info msg="CreateContainer within sandbox \"a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 23:02:12.366011 containerd[1555]: time="2025-11-23T23:02:12.364322296Z" level=info 
msg="Container 5cca972a86482422f1418e4b39b4d5a6f304015f0d7257d3a1b3f8400d318191: CDI devices from CRI Config.CDIDevices: []" Nov 23 23:02:12.367535 systemd[1]: Started cri-containerd-ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4.scope - libcontainer container ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4. Nov 23 23:02:12.375942 containerd[1555]: time="2025-11-23T23:02:12.375824539Z" level=info msg="CreateContainer within sandbox \"a2bb401cabb68ce247d394304755c81b07ed8091b203382ea67be3be985cd2dd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5cca972a86482422f1418e4b39b4d5a6f304015f0d7257d3a1b3f8400d318191\"" Nov 23 23:02:12.378363 containerd[1555]: time="2025-11-23T23:02:12.377426224Z" level=info msg="StartContainer for \"5cca972a86482422f1418e4b39b4d5a6f304015f0d7257d3a1b3f8400d318191\"" Nov 23 23:02:12.378867 containerd[1555]: time="2025-11-23T23:02:12.378746869Z" level=info msg="connecting to shim 5cca972a86482422f1418e4b39b4d5a6f304015f0d7257d3a1b3f8400d318191" address="unix:///run/containerd/s/7055a2c81e0965b8235fa6f89daa77ddd17aeeae08b023bc26cbd33b06a09641" protocol=ttrpc version=3 Nov 23 23:02:12.408786 systemd[1]: Started cri-containerd-5cca972a86482422f1418e4b39b4d5a6f304015f0d7257d3a1b3f8400d318191.scope - libcontainer container 5cca972a86482422f1418e4b39b4d5a6f304015f0d7257d3a1b3f8400d318191. Nov 23 23:02:12.459409 containerd[1555]: time="2025-11-23T23:02:12.459354369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56947b74b7-c65fq,Uid:c25375d2-2332-49bd-a8e3-61dfcb956c34,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecce1e49d4ab0cb8c35923b27af706e24c9748a2ecfb931ed9c43be0c677a5d4\"" Nov 23 23:02:12.515060 containerd[1555]: time="2025-11-23T23:02:12.515001736Z" level=info msg="StartContainer for \"5cca972a86482422f1418e4b39b4d5a6f304015f0d7257d3a1b3f8400d318191\" returns successfully" Nov 23 23:02:12.565497 systemd-networkd[1408]: cali06a16c1e1bf: Gained IPv6LL Nov 23 23:02:12.566113 systemd-networkd[1408]: califb1cc11bd6d: Gained IPv6LL Nov 23 23:02:12.569742 containerd[1555]: time="2025-11-23T23:02:12.569412458Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:12.576171 containerd[1555]: time="2025-11-23T23:02:12.576082803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:02:12.576520 containerd[1555]: time="2025-11-23T23:02:12.576121563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:02:12.576888 kubelet[2752]: E1123 23:02:12.576831 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:12.577028 kubelet[2752]: E1123 23:02:12.576910 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:12.577144 kubelet[2752]: E1123 23:02:12.577098 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:12.579644 kubelet[2752]: E1123 23:02:12.579202 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 
23:02:12.580351 containerd[1555]: time="2025-11-23T23:02:12.580308059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:02:12.887443 systemd-networkd[1408]: calid297d11aa29: Gained IPv6LL Nov 23 23:02:12.925801 containerd[1555]: time="2025-11-23T23:02:12.925714143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:12.928964 containerd[1555]: time="2025-11-23T23:02:12.928874714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:02:12.929420 containerd[1555]: time="2025-11-23T23:02:12.928915474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:12.929501 kubelet[2752]: E1123 23:02:12.929443 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:12.929905 kubelet[2752]: E1123 23:02:12.929506 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:12.929905 kubelet[2752]: E1123 23:02:12.929678 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf2rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56947b74b7-c65fq_calico-system(c25375d2-2332-49bd-a8e3-61dfcb956c34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:12.931348 kubelet[2752]: E1123 23:02:12.931253 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:02:13.124150 kubelet[2752]: E1123 23:02:13.123791 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:02:13.134524 kubelet[2752]: E1123 23:02:13.134106 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:02:13.135179 kubelet[2752]: E1123 23:02:13.135112 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:02:13.216603 kubelet[2752]: I1123 23:02:13.216460 2752 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xms8x" podStartSLOduration=44.216150692 podStartE2EDuration="44.216150692s" podCreationTimestamp="2025-11-23 23:01:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 23:02:13.193230372 +0000 UTC m=+49.451811727" watchObservedRunningTime="2025-11-23 23:02:13.216150692 +0000 UTC m=+49.474732047" Nov 23 23:02:13.398444 systemd-networkd[1408]: cali7c668aa53a0: Gained IPv6LL Nov 23 23:02:13.870099 containerd[1555]: time="2025-11-23T23:02:13.870011371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7f78c7-d8v97,Uid:1e722cd7-3fb4-43d9-b64b-32096b2087bd,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:02:13.870440 containerd[1555]: time="2025-11-23T23:02:13.870029171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-lkrpd,Uid:8e7fff62-849b-430a-8c5a-7b0e171a5c60,Namespace:calico-apiserver,Attempt:0,}" Nov 23 23:02:13.910449 systemd-networkd[1408]: cali7bcb08c2679: Gained IPv6LL Nov 23 23:02:14.062040 systemd-networkd[1408]: calife5eb45aa2e: Link UP Nov 23 23:02:14.064431 systemd-networkd[1408]: calife5eb45aa2e: Gained carrier Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.925 [INFO][4631] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0 calico-apiserver-5b9f97f6d6- calico-apiserver 8e7fff62-849b-430a-8c5a-7b0e171a5c60 814 0 2025-11-23 23:01:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b9f97f6d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 calico-apiserver-5b9f97f6d6-lkrpd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calife5eb45aa2e [] [] }} ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.925 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.980 [INFO][4655] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" HandleID="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.981 [INFO][4655] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" HandleID="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"calico-apiserver-5b9f97f6d6-lkrpd", "timestamp":"2025-11-23 23:02:13.980848957 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.981 [INFO][4655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.981 [INFO][4655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.981 [INFO][4655] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:13.995 [INFO][4655] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.008 [INFO][4655] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.014 [INFO][4655] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.017 [INFO][4655] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.021 [INFO][4655] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.021 [INFO][4655] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.023 [INFO][4655] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.029 [INFO][4655] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.040 [INFO][4655] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.135/26] block=192.168.124.128/26 handle="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.040 [INFO][4655] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.135/26] handle="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.041 [INFO][4655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
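[editor's note] The ipam.go records above narrate one complete CNI ADD allocation: acquire the host-wide IPAM lock, look up the host's affinity for block 192.168.124.128/26, load the block, claim 192.168.124.135/26 for the new endpoint, write the block back, release the lock. The snippet below is a minimal, self-contained Python sketch of that claim step only, under the assumption of an in-memory /26 block and a plain set of already-assigned addresses (the .129-.134 values are stand-ins for earlier endpoints, not taken from the log); it illustrates the flow the log records, not Calico's actual datastore code.

# Illustrative only: mimics the "claim next free IP from an affine /26 block"
# step that ipam.go logs above. Calico's real implementation goes through its
# datastore and the host-wide IPAM lock; this sketch assumes an in-memory block.
import ipaddress

def claim_next_ip(block_cidr: str, assigned: set) -> str:
    """Return the first unassigned address in the block, mirroring the
    'Attempting to assign 1 addresses from block' step in the log."""
    block = ipaddress.ip_network(block_cidr)
    for addr in block.hosts():          # hosts() skips the network/broadcast addresses
        if str(addr) not in assigned:
            assigned.add(str(addr))     # corresponds to "Writing block in order to claim IPs"
            return f"{addr}/{block.prefixlen}"
    raise RuntimeError(f"block {block_cidr} is exhausted")

if __name__ == "__main__":
    # Hypothetical addresses standing in for endpoints created earlier on this node.
    already_used = {f"192.168.124.{i}" for i in range(129, 135)}
    print(claim_next_ip("192.168.124.128/26", already_used))  # -> 192.168.124.135/26

The "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock" lines bracket this step so that concurrent CNI ADDs on the same node cannot claim the same address from the block.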
Nov 23 23:02:14.101817 containerd[1555]: 2025-11-23 23:02:14.041 [INFO][4655] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.135/26] IPv6=[] ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" HandleID="k8s-pod-network.321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" Nov 23 23:02:14.105008 containerd[1555]: 2025-11-23 23:02:14.045 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0", GenerateName:"calico-apiserver-5b9f97f6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e7fff62-849b-430a-8c5a-7b0e171a5c60", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f97f6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"calico-apiserver-5b9f97f6d6-lkrpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife5eb45aa2e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:14.105008 containerd[1555]: 2025-11-23 23:02:14.045 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.135/32] ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" Nov 23 23:02:14.105008 containerd[1555]: 2025-11-23 23:02:14.045 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife5eb45aa2e ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" Nov 23 23:02:14.105008 containerd[1555]: 2025-11-23 23:02:14.065 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" Nov 23 23:02:14.105008 containerd[1555]: 2025-11-23 
23:02:14.067 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0", GenerateName:"calico-apiserver-5b9f97f6d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e7fff62-849b-430a-8c5a-7b0e171a5c60", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b9f97f6d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a", Pod:"calico-apiserver-5b9f97f6d6-lkrpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calife5eb45aa2e", MAC:"2e:d8:56:78:eb:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:14.105008 containerd[1555]: 2025-11-23 23:02:14.092 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" Namespace="calico-apiserver" Pod="calico-apiserver-5b9f97f6d6-lkrpd" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5b9f97f6d6--lkrpd-eth0" Nov 23 23:02:14.143826 kubelet[2752]: E1123 23:02:14.142414 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:02:14.156460 containerd[1555]: time="2025-11-23T23:02:14.156388575Z" level=info msg="connecting to shim 321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a" address="unix:///run/containerd/s/500ff819c30fa923c5960f88d11b6843a393182da9086aaaf671a3e00a8ed0a7" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:14.199087 systemd[1]: Started cri-containerd-321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a.scope - libcontainer container 
321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a. Nov 23 23:02:14.230494 systemd-networkd[1408]: cali83b34cc8218: Link UP Nov 23 23:02:14.234599 systemd-networkd[1408]: cali83b34cc8218: Gained carrier Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:13.958 [INFO][4630] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0 calico-apiserver-5f7f78c7- calico-apiserver 1e722cd7-3fb4-43d9-b64b-32096b2087bd 817 0 2025-11-23 23:01:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f7f78c7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 calico-apiserver-5f7f78c7-d8v97 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83b34cc8218 [] [] }} ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:13.958 [INFO][4630] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.003 [INFO][4661] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" HandleID="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.003 [INFO][4661] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" HandleID="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"calico-apiserver-5f7f78c7-d8v97", "timestamp":"2025-11-23 23:02:14.003368755 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.003 [INFO][4661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.041 [INFO][4661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.041 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.106 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.120 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.142 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.152 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.172 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.172 [INFO][4661] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.184 [INFO][4661] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84 Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.197 [INFO][4661] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.216 [INFO][4661] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.136/26] block=192.168.124.128/26 handle="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.216 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.136/26] handle="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.216 [INFO][4661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 23:02:14.262578 containerd[1555]: 2025-11-23 23:02:14.216 [INFO][4661] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.136/26] IPv6=[] ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" HandleID="k8s-pod-network.a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Workload="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" Nov 23 23:02:14.263523 containerd[1555]: 2025-11-23 23:02:14.220 [INFO][4630] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0", GenerateName:"calico-apiserver-5f7f78c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e722cd7-3fb4-43d9-b64b-32096b2087bd", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7f78c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"calico-apiserver-5f7f78c7-d8v97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83b34cc8218", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:14.263523 containerd[1555]: 2025-11-23 23:02:14.221 [INFO][4630] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.136/32] ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" Nov 23 23:02:14.263523 containerd[1555]: 2025-11-23 23:02:14.221 [INFO][4630] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83b34cc8218 ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" Nov 23 23:02:14.263523 containerd[1555]: 2025-11-23 23:02:14.235 [INFO][4630] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" Nov 23 23:02:14.263523 containerd[1555]: 2025-11-23 23:02:14.238 [INFO][4630] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0", GenerateName:"calico-apiserver-5f7f78c7-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e722cd7-3fb4-43d9-b64b-32096b2087bd", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f7f78c7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84", Pod:"calico-apiserver-5f7f78c7-d8v97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.124.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83b34cc8218", MAC:"42:2e:05:ba:84:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:14.263523 containerd[1555]: 2025-11-23 23:02:14.258 [INFO][4630] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" Namespace="calico-apiserver" Pod="calico-apiserver-5f7f78c7-d8v97" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-calico--apiserver--5f7f78c7--d8v97-eth0" Nov 23 23:02:14.301835 containerd[1555]: time="2025-11-23T23:02:14.301656890Z" level=info msg="connecting to shim a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84" address="unix:///run/containerd/s/4d281d84f2bb955aca5f3dd7d06fe1334462e26f678df4d1c8484b20c17bfaba" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:14.362522 systemd[1]: Started cri-containerd-a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84.scope - libcontainer container a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84. 
Nov 23 23:02:14.367834 containerd[1555]: time="2025-11-23T23:02:14.367552145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b9f97f6d6-lkrpd,Uid:8e7fff62-849b-430a-8c5a-7b0e171a5c60,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"321dff19801eb6a12a2c0fc06095398aa214dcc5257e05f59f0eaf41ac73192a\"" Nov 23 23:02:14.371413 containerd[1555]: time="2025-11-23T23:02:14.370666395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:14.427265 containerd[1555]: time="2025-11-23T23:02:14.426871739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f7f78c7-d8v97,Uid:1e722cd7-3fb4-43d9-b64b-32096b2087bd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a52df7d6048d60008f581c19d1215d46bc6694a39662dc1c8b70ec03efda4d84\"" Nov 23 23:02:14.753027 containerd[1555]: time="2025-11-23T23:02:14.752806764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:14.754206 containerd[1555]: time="2025-11-23T23:02:14.754158408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:14.754342 containerd[1555]: time="2025-11-23T23:02:14.754258129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:14.754713 kubelet[2752]: E1123 23:02:14.754590 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:14.754713 kubelet[2752]: E1123 23:02:14.754674 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:14.754996 kubelet[2752]: E1123 23:02:14.754924 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-lkrpd_calico-apiserver(8e7fff62-849b-430a-8c5a-7b0e171a5c60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:14.755736 containerd[1555]: time="2025-11-23T23:02:14.755575573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:14.756678 kubelet[2752]: E1123 23:02:14.756492 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:02:14.871246 containerd[1555]: time="2025-11-23T23:02:14.871144670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msv7x,Uid:0f266d4c-4f00-43ea-b251-4bdc9532cfcf,Namespace:calico-system,Attempt:0,}" Nov 23 23:02:15.037320 systemd-networkd[1408]: cali1bb9372eee6: Link UP Nov 23 23:02:15.038157 systemd-networkd[1408]: cali1bb9372eee6: Gained carrier Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.928 [INFO][4779] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0 goldmane-666569f655- calico-system 0f266d4c-4f00-43ea-b251-4bdc9532cfcf 821 0 2025-11-23 23:01:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-1-d-6a40a07c08 goldmane-666569f655-msv7x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1bb9372eee6 [] [] }} ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.928 [INFO][4779] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.969 [INFO][4791] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" HandleID="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Workload="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.969 [INFO][4791] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" HandleID="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Workload="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-1-d-6a40a07c08", "pod":"goldmane-666569f655-msv7x", "timestamp":"2025-11-23 23:02:14.969620872 +0000 UTC"}, Hostname:"ci-4459-2-1-d-6a40a07c08", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.969 [INFO][4791] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.969 [INFO][4791] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.969 [INFO][4791] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-1-d-6a40a07c08' Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.983 [INFO][4791] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.991 [INFO][4791] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:14.999 [INFO][4791] ipam/ipam.go 511: Trying affinity for 192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.005 [INFO][4791] ipam/ipam.go 158: Attempting to load block cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.009 [INFO][4791] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.124.128/26 host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.009 [INFO][4791] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.124.128/26 handle="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.014 [INFO][4791] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99 Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.019 [INFO][4791] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.124.128/26 handle="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.029 [INFO][4791] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.124.137/26] block=192.168.124.128/26 handle="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.029 [INFO][4791] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.124.137/26] handle="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" host="ci-4459-2-1-d-6a40a07c08" Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.029 [INFO][4791] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
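[editor's note] The repeated "fetch failed after status: 404 Not Found" and ErrImagePull/NotFound messages in this section all come from containerd asking ghcr.io for manifests that do not resolve under the ghcr.io/flatcar/calico/*:v3.30.4 references. The sketch below reproduces that check with Python's standard library, assuming the usual OCI distribution flow for a public registry (anonymous pull token, then a manifest request); the repository and tag are taken from the log, while the endpoints and headers are a plain illustration rather than what containerd itself runs.

# Illustrative check for the 404s logged above: ask ghcr.io whether a manifest
# exists for a given repository:tag, assuming the standard OCI distribution API.
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"

def manifest_status(repository: str, tag: str) -> int:
    """Return the HTTP status the registry gives for <repository>:<tag>."""
    # Anonymous bearer token for a pull scope (works for public repositories).
    token_url = (f"https://{REGISTRY}/token?service={REGISTRY}"
                 f"&scope=repository:{repository}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]

    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept both OCI and Docker manifest-list media types.
            "Accept": ("application/vnd.oci.image.index.v1+json, "
                       "application/vnd.docker.distribution.manifest.list.v2+json"),
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code        # 404 corresponds to the "not found" pull errors above

if __name__ == "__main__":
    # Image reference taken from the failed pulls in the log.
    print(manifest_status("flatcar/calico/apiserver", "v3.30.4"))

A 404 at this level (rather than a 401/403) is what containerd reports as "failed to resolve reference ...: not found", which kubelet then surfaces first as ErrImagePull and, on retry, as ImagePullBackOff for the affected pods.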
Nov 23 23:02:15.058841 containerd[1555]: 2025-11-23 23:02:15.029 [INFO][4791] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.124.137/26] IPv6=[] ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" HandleID="k8s-pod-network.60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Workload="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" Nov 23 23:02:15.060736 containerd[1555]: 2025-11-23 23:02:15.032 [INFO][4779] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0f266d4c-4f00-43ea-b251-4bdc9532cfcf", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"", Pod:"goldmane-666569f655-msv7x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1bb9372eee6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:15.060736 containerd[1555]: 2025-11-23 23:02:15.032 [INFO][4779] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.124.137/32] ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" Nov 23 23:02:15.060736 containerd[1555]: 2025-11-23 23:02:15.033 [INFO][4779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1bb9372eee6 ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" Nov 23 23:02:15.060736 containerd[1555]: 2025-11-23 23:02:15.039 [INFO][4779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" Nov 23 23:02:15.060736 containerd[1555]: 2025-11-23 23:02:15.040 [INFO][4779] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" 
Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0f266d4c-4f00-43ea-b251-4bdc9532cfcf", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 23, 1, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-1-d-6a40a07c08", ContainerID:"60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99", Pod:"goldmane-666569f655-msv7x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.124.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1bb9372eee6", MAC:"a6:f6:db:2d:39:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 23:02:15.060736 containerd[1555]: 2025-11-23 23:02:15.053 [INFO][4779] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" Namespace="calico-system" Pod="goldmane-666569f655-msv7x" WorkloadEndpoint="ci--4459--2--1--d--6a40a07c08-k8s-goldmane--666569f655--msv7x-eth0" Nov 23 23:02:15.098474 containerd[1555]: time="2025-11-23T23:02:15.098283833Z" level=info msg="connecting to shim 60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99" address="unix:///run/containerd/s/02cb4cc41095ac5d9eb93a8afcff84f903a7c4520649857e4a32406e3316dbfc" namespace=k8s.io protocol=ttrpc version=3 Nov 23 23:02:15.099115 containerd[1555]: time="2025-11-23T23:02:15.099039915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:15.104096 containerd[1555]: time="2025-11-23T23:02:15.104051290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:15.105210 containerd[1555]: time="2025-11-23T23:02:15.104126251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:15.105903 kubelet[2752]: E1123 23:02:15.105504 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:15.106244 kubelet[2752]: E1123 23:02:15.105994 
2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:15.107849 kubelet[2752]: E1123 23:02:15.107629 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8dqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f7f78c7-d8v97_calico-apiserver(1e722cd7-3fb4-43d9-b64b-32096b2087bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:15.109276 kubelet[2752]: E1123 23:02:15.108805 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:02:15.127005 systemd-networkd[1408]: calife5eb45aa2e: Gained IPv6LL Nov 23 23:02:15.160906 kubelet[2752]: E1123 23:02:15.160593 
2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:02:15.168543 systemd[1]: Started cri-containerd-60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99.scope - libcontainer container 60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99. Nov 23 23:02:15.181723 kubelet[2752]: E1123 23:02:15.181664 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:02:15.285879 containerd[1555]: time="2025-11-23T23:02:15.285806167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-msv7x,Uid:0f266d4c-4f00-43ea-b251-4bdc9532cfcf,Namespace:calico-system,Attempt:0,} returns sandbox id \"60bf49ee7130ab2c606a52bc61cbb9b7881e79adfa40092340364f25a4c70e99\"" Nov 23 23:02:15.289900 containerd[1555]: time="2025-11-23T23:02:15.289775139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:02:15.633357 containerd[1555]: time="2025-11-23T23:02:15.633267351Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:15.635548 containerd[1555]: time="2025-11-23T23:02:15.635380798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:02:15.635920 containerd[1555]: time="2025-11-23T23:02:15.635457878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:15.635975 kubelet[2752]: E1123 23:02:15.635803 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:15.635975 kubelet[2752]: E1123 23:02:15.635852 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:15.636187 kubelet[2752]: E1123 23:02:15.636062 2752 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bpn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msv7x_calico-system(0f266d4c-4f00-43ea-b251-4bdc9532cfcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:15.637518 kubelet[2752]: E1123 23:02:15.637428 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 
23:02:16.183559 kubelet[2752]: E1123 23:02:16.183502 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:02:16.184584 kubelet[2752]: E1123 23:02:16.183449 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:02:16.184584 kubelet[2752]: E1123 23:02:16.184110 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:02:16.278529 systemd-networkd[1408]: cali83b34cc8218: Gained IPv6LL Nov 23 23:02:16.469985 systemd-networkd[1408]: cali1bb9372eee6: Gained IPv6LL Nov 23 23:02:17.200657 kubelet[2752]: E1123 23:02:17.200109 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:02:23.872347 containerd[1555]: time="2025-11-23T23:02:23.871658330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:02:24.196557 containerd[1555]: time="2025-11-23T23:02:24.196354701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:24.198659 containerd[1555]: time="2025-11-23T23:02:24.198422545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:02:24.198894 containerd[1555]: time="2025-11-23T23:02:24.198701825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:02:24.199352 kubelet[2752]: 
E1123 23:02:24.199102 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:24.199352 kubelet[2752]: E1123 23:02:24.199253 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:24.200333 kubelet[2752]: E1123 23:02:24.199601 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:24.201257 containerd[1555]: time="2025-11-23T23:02:24.200179828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:24.541497 containerd[1555]: time="2025-11-23T23:02:24.541424093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:24.543076 containerd[1555]: time="2025-11-23T23:02:24.542777775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:24.543076 containerd[1555]: time="2025-11-23T23:02:24.542932935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:24.543250 kubelet[2752]: E1123 23:02:24.543119 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:24.543250 kubelet[2752]: E1123 23:02:24.543175 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:24.544148 kubelet[2752]: E1123 23:02:24.543533 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rsnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-6nxhq_calico-apiserver(3b014a55-de73-4ac9-9e35-2cc72ed4bcca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:24.544458 containerd[1555]: time="2025-11-23T23:02:24.543790257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:02:24.545104 kubelet[2752]: E1123 23:02:24.544992 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:02:24.898419 containerd[1555]: time="2025-11-23T23:02:24.898191504Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:24.900379 containerd[1555]: time="2025-11-23T23:02:24.900227387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:02:24.900379 containerd[1555]: time="2025-11-23T23:02:24.900307748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:02:24.900585 kubelet[2752]: E1123 23:02:24.900506 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:24.900683 kubelet[2752]: E1123 23:02:24.900615 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:24.901217 kubelet[2752]: E1123 23:02:24.900981 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a76a4d8a680f4210a2a45574ec2b4759,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:24.901774 containerd[1555]: time="2025-11-23T23:02:24.901183909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:02:25.244367 containerd[1555]: time="2025-11-23T23:02:25.243928150Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:25.245689 containerd[1555]: time="2025-11-23T23:02:25.245622713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:02:25.245834 containerd[1555]: time="2025-11-23T23:02:25.245746073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:02:25.246009 kubelet[2752]: E1123 23:02:25.245962 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:25.246462 kubelet[2752]: E1123 23:02:25.246024 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:25.246462 kubelet[2752]: E1123 23:02:25.246272 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:25.247443 containerd[1555]: time="2025-11-23T23:02:25.247377036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:02:25.248470 kubelet[2752]: E1123 23:02:25.248414 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:02:25.576070 containerd[1555]: time="2025-11-23T23:02:25.575784763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:25.577706 containerd[1555]: time="2025-11-23T23:02:25.577524926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:02:25.577706 containerd[1555]: time="2025-11-23T23:02:25.577570846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:25.578271 kubelet[2752]: E1123 23:02:25.578046 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:25.578271 kubelet[2752]: E1123 23:02:25.578100 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:25.578271 kubelet[2752]: E1123 23:02:25.578225 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:25.580350 kubelet[2752]: E1123 23:02:25.580264 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:02:25.871148 containerd[1555]: time="2025-11-23T23:02:25.870968350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:02:26.219594 containerd[1555]: time="2025-11-23T23:02:26.219350470Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:26.221217 containerd[1555]: time="2025-11-23T23:02:26.221054556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:02:26.221217 containerd[1555]: time="2025-11-23T23:02:26.221167393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:26.221883 kubelet[2752]: E1123 23:02:26.221835 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:26.221883 kubelet[2752]: E1123 23:02:26.221893 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:26.222055 kubelet[2752]: E1123 23:02:26.222009 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf2rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56947b74b7-c65fq_calico-system(c25375d2-2332-49bd-a8e3-61dfcb956c34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:26.224039 kubelet[2752]: E1123 23:02:26.223981 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:02:29.870361 containerd[1555]: time="2025-11-23T23:02:29.870006764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:30.200129 containerd[1555]: time="2025-11-23T23:02:30.199965148Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:30.202192 containerd[1555]: time="2025-11-23T23:02:30.202077230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:30.202353 containerd[1555]: time="2025-11-23T23:02:30.202178709Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:30.202439 kubelet[2752]: E1123 23:02:30.202387 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:30.202729 kubelet[2752]: E1123 23:02:30.202478 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:30.202729 kubelet[2752]: 
E1123 23:02:30.202662 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-lkrpd_calico-apiserver(8e7fff62-849b-430a-8c5a-7b0e171a5c60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:30.204346 kubelet[2752]: E1123 23:02:30.204282 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:02:30.870770 containerd[1555]: time="2025-11-23T23:02:30.869706473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:02:31.192385 containerd[1555]: time="2025-11-23T23:02:31.192189274Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:31.194556 containerd[1555]: time="2025-11-23T23:02:31.194404955Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:02:31.194556 containerd[1555]: time="2025-11-23T23:02:31.194482873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:31.194757 kubelet[2752]: E1123 23:02:31.194668 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:31.194757 kubelet[2752]: E1123 23:02:31.194719 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:02:31.195186 kubelet[2752]: E1123 23:02:31.194979 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bpn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msv7x_calico-system(0f266d4c-4f00-43ea-b251-4bdc9532cfcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:31.196519 kubelet[2752]: E1123 23:02:31.196392 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:02:31.196595 containerd[1555]: time="2025-11-23T23:02:31.196041086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:31.537648 containerd[1555]: time="2025-11-23T23:02:31.537082466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:31.538744 containerd[1555]: time="2025-11-23T23:02:31.538595759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:31.538882 containerd[1555]: time="2025-11-23T23:02:31.538696077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:31.539006 kubelet[2752]: E1123 23:02:31.538968 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:31.539608 kubelet[2752]: E1123 23:02:31.539360 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:31.539608 kubelet[2752]: E1123 23:02:31.539533 2752 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8dqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f7f78c7-d8v97_calico-apiserver(1e722cd7-3fb4-43d9-b64b-32096b2087bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:31.541086 kubelet[2752]: E1123 23:02:31.541007 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:02:34.870360 kubelet[2752]: E1123 23:02:34.869850 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" 
podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:02:35.873889 kubelet[2752]: E1123 23:02:35.873824 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:02:36.872599 kubelet[2752]: E1123 23:02:36.872535 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:02:39.874386 kubelet[2752]: E1123 23:02:39.873984 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:02:43.872142 kubelet[2752]: E1123 23:02:43.872010 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:02:44.869628 kubelet[2752]: E1123 23:02:44.869546 2752 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:02:45.875046 containerd[1555]: time="2025-11-23T23:02:45.874007664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:45.877678 kubelet[2752]: E1123 23:02:45.875415 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:02:46.221709 containerd[1555]: time="2025-11-23T23:02:46.221317627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:46.223419 containerd[1555]: time="2025-11-23T23:02:46.223360043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:46.223540 containerd[1555]: time="2025-11-23T23:02:46.223467642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:46.225562 kubelet[2752]: E1123 23:02:46.225482 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:46.225562 kubelet[2752]: E1123 23:02:46.225541 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:46.226456 kubelet[2752]: E1123 23:02:46.226286 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rsnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-6nxhq_calico-apiserver(3b014a55-de73-4ac9-9e35-2cc72ed4bcca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:46.227622 kubelet[2752]: E1123 23:02:46.227578 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:02:46.872330 containerd[1555]: time="2025-11-23T23:02:46.872265224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:02:47.216786 containerd[1555]: time="2025-11-23T23:02:47.216638885Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:47.218010 containerd[1555]: time="2025-11-23T23:02:47.217938030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:02:47.218128 containerd[1555]: time="2025-11-23T23:02:47.218053989Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:02:47.218898 kubelet[2752]: E1123 23:02:47.218279 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:47.219835 kubelet[2752]: E1123 23:02:47.219346 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:02:47.220036 kubelet[2752]: E1123 23:02:47.219984 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a76a4d8a680f4210a2a45574ec2b4759,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:47.223331 containerd[1555]: time="2025-11-23T23:02:47.222818094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:02:47.555184 containerd[1555]: time="2025-11-23T23:02:47.555067012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:47.558175 containerd[1555]: time="2025-11-23T23:02:47.558049578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:02:47.558572 containerd[1555]: time="2025-11-23T23:02:47.558217736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:47.558732 kubelet[2752]: E1123 23:02:47.558477 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:47.558732 kubelet[2752]: E1123 23:02:47.558551 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:02:47.559197 kubelet[2752]: E1123 23:02:47.558715 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:47.560486 kubelet[2752]: E1123 23:02:47.560418 2752 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:02:49.873674 containerd[1555]: time="2025-11-23T23:02:49.873623620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:02:50.226232 containerd[1555]: time="2025-11-23T23:02:50.225689411Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:50.228650 containerd[1555]: time="2025-11-23T23:02:50.228088626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:02:50.228650 containerd[1555]: time="2025-11-23T23:02:50.228223304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:02:50.229426 kubelet[2752]: E1123 23:02:50.229214 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:50.230569 kubelet[2752]: E1123 23:02:50.229279 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:02:50.230569 kubelet[2752]: E1123 23:02:50.230499 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:50.238707 containerd[1555]: time="2025-11-23T23:02:50.238574675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:02:50.580102 containerd[1555]: time="2025-11-23T23:02:50.579868414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:50.581684 containerd[1555]: time="2025-11-23T23:02:50.581607636Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:02:50.582635 containerd[1555]: time="2025-11-23T23:02:50.581830553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:02:50.582756 kubelet[2752]: E1123 23:02:50.582040 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:50.582756 kubelet[2752]: E1123 23:02:50.582093 2752 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:02:50.582756 kubelet[2752]: E1123 23:02:50.582212 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:50.585314 kubelet[2752]: E1123 23:02:50.583762 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:02:52.870860 containerd[1555]: time="2025-11-23T23:02:52.870449345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:02:53.223423 containerd[1555]: time="2025-11-23T23:02:53.222847523Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:53.227254 containerd[1555]: time="2025-11-23T23:02:53.227103841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:02:53.227254 containerd[1555]: time="2025-11-23T23:02:53.227218160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:02:53.227739 kubelet[2752]: E1123 23:02:53.227652 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:53.228072 kubelet[2752]: E1123 23:02:53.227742 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:02:53.230676 kubelet[2752]: E1123 23:02:53.227962 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf2rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56947b74b7-c65fq_calico-system(c25375d2-2332-49bd-a8e3-61dfcb956c34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:53.230676 kubelet[2752]: E1123 23:02:53.230569 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:02:57.872689 containerd[1555]: time="2025-11-23T23:02:57.872569321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:58.211694 containerd[1555]: time="2025-11-23T23:02:58.211418135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:58.213100 containerd[1555]: time="2025-11-23T23:02:58.212969122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:58.213100 containerd[1555]: time="2025-11-23T23:02:58.213040441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:58.213894 kubelet[2752]: E1123 23:02:58.213683 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:58.215375 kubelet[2752]: E1123 23:02:58.214597 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:58.215375 kubelet[2752]: E1123 23:02:58.215092 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8dqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f7f78c7-d8v97_calico-apiserver(1e722cd7-3fb4-43d9-b64b-32096b2087bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:58.216312 containerd[1555]: time="2025-11-23T23:02:58.216189254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:02:58.216579 kubelet[2752]: E1123 23:02:58.216452 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:02:58.558130 containerd[1555]: time="2025-11-23T23:02:58.558084109Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Nov 23 23:02:58.562137 containerd[1555]: time="2025-11-23T23:02:58.562071834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:02:58.562551 containerd[1555]: time="2025-11-23T23:02:58.562186193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:02:58.564554 kubelet[2752]: E1123 23:02:58.562518 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:58.564554 kubelet[2752]: E1123 23:02:58.562586 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:02:58.564554 kubelet[2752]: E1123 23:02:58.562738 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-lkrpd_calico-apiserver(8e7fff62-849b-430a-8c5a-7b0e171a5c60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:02:58.565965 kubelet[2752]: E1123 23:02:58.565536 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:02:59.871446 kubelet[2752]: E1123 23:02:59.871033 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:02:59.871947 containerd[1555]: time="2025-11-23T23:02:59.871158421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:03:00.219724 containerd[1555]: time="2025-11-23T23:03:00.219328334Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:00.220832 containerd[1555]: time="2025-11-23T23:03:00.220722003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:03:00.220832 containerd[1555]: time="2025-11-23T23:03:00.220775802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:00.221108 kubelet[2752]: E1123 23:03:00.221019 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:00.221108 kubelet[2752]: E1123 23:03:00.221070 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:00.221260 kubelet[2752]: E1123 23:03:00.221192 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bpn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msv7x_calico-system(0f266d4c-4f00-43ea-b251-4bdc9532cfcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:00.223320 kubelet[2752]: E1123 23:03:00.222493 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:03:01.871156 kubelet[2752]: E1123 23:03:01.870924 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:03:01.872243 kubelet[2752]: E1123 23:03:01.871637 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:03:07.871515 kubelet[2752]: E1123 23:03:07.870278 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:03:10.871121 kubelet[2752]: E1123 23:03:10.870745 2752 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:03:10.871710 kubelet[2752]: E1123 23:03:10.871674 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:03:13.871123 kubelet[2752]: E1123 23:03:13.870979 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:03:14.872608 kubelet[2752]: E1123 23:03:14.872480 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:03:14.876383 kubelet[2752]: E1123 23:03:14.872801 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:03:15.869855 kubelet[2752]: E1123 23:03:15.869657 2752 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:03:18.869008 kubelet[2752]: E1123 23:03:18.868958 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:03:22.868785 kubelet[2752]: E1123 23:03:22.868720 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:03:23.872370 kubelet[2752]: E1123 23:03:23.871677 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:03:25.872058 kubelet[2752]: E1123 23:03:25.872006 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" 
podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:03:26.870271 kubelet[2752]: E1123 23:03:26.870213 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:03:27.871911 kubelet[2752]: E1123 23:03:27.871845 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:03:28.869393 kubelet[2752]: E1123 23:03:28.869326 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:03:32.870246 kubelet[2752]: E1123 23:03:32.870200 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:03:34.871546 kubelet[2752]: E1123 23:03:34.871417 2752 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:03:36.869946 kubelet[2752]: E1123 23:03:36.869791 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:03:36.873777 containerd[1555]: time="2025-11-23T23:03:36.870045933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:37.208344 containerd[1555]: time="2025-11-23T23:03:37.207829293Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:37.211514 containerd[1555]: time="2025-11-23T23:03:37.211359158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:37.211514 containerd[1555]: time="2025-11-23T23:03:37.211438238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:37.211717 kubelet[2752]: E1123 23:03:37.211668 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:37.211769 kubelet[2752]: E1123 23:03:37.211732 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:37.212028 kubelet[2752]: E1123 23:03:37.211980 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rsnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-6nxhq_calico-apiserver(3b014a55-de73-4ac9-9e35-2cc72ed4bcca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:37.213452 kubelet[2752]: E1123 23:03:37.213357 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:03:39.870471 containerd[1555]: time="2025-11-23T23:03:39.870421076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 23:03:40.208799 containerd[1555]: time="2025-11-23T23:03:40.206958619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:40.208799 containerd[1555]: time="2025-11-23T23:03:40.208581653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 23:03:40.208799 containerd[1555]: time="2025-11-23T23:03:40.208669372Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 23:03:40.208985 kubelet[2752]: E1123 23:03:40.208807 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:40.208985 kubelet[2752]: E1123 23:03:40.208857 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 23:03:40.208985 kubelet[2752]: E1123 23:03:40.208960 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a76a4d8a680f4210a2a45574ec2b4759,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:40.212327 containerd[1555]: time="2025-11-23T23:03:40.212223998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 23:03:40.545970 containerd[1555]: time="2025-11-23T23:03:40.545798200Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:40.547003 containerd[1555]: time="2025-11-23T23:03:40.546866036Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 23:03:40.547003 containerd[1555]: time="2025-11-23T23:03:40.546966356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 23:03:40.547172 kubelet[2752]: E1123 23:03:40.547126 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:40.547224 kubelet[2752]: E1123 23:03:40.547179 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 23:03:40.547575 kubelet[2752]: E1123 23:03:40.547342 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqwf2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b9685c896-hmv6x_calico-system(c3a04e07-75ec-47a7-ac40-5bddb6afbad1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:40.548759 kubelet[2752]: E1123 23:03:40.548702 2752 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:03:40.871271 containerd[1555]: time="2025-11-23T23:03:40.870729116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 23:03:41.203524 containerd[1555]: time="2025-11-23T23:03:41.203063454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:41.204364 containerd[1555]: time="2025-11-23T23:03:41.204232889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 23:03:41.205341 containerd[1555]: time="2025-11-23T23:03:41.204323849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 23:03:41.205472 kubelet[2752]: E1123 23:03:41.204698 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:41.205472 kubelet[2752]: E1123 23:03:41.204764 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 23:03:41.205472 kubelet[2752]: E1123 23:03:41.204900 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:41.215037 containerd[1555]: time="2025-11-23T23:03:41.214972288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 23:03:41.545507 containerd[1555]: time="2025-11-23T23:03:41.545460479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:41.547130 containerd[1555]: time="2025-11-23T23:03:41.546772474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 23:03:41.547256 containerd[1555]: time="2025-11-23T23:03:41.547131593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 23:03:41.547566 kubelet[2752]: E1123 23:03:41.547451 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:41.547566 kubelet[2752]: E1123 23:03:41.547505 2752 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 23:03:41.547908 kubelet[2752]: E1123 23:03:41.547709 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:41.549145 kubelet[2752]: E1123 23:03:41.549099 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:03:42.870382 containerd[1555]: time="2025-11-23T23:03:42.869565363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 23:03:43.183880 containerd[1555]: time="2025-11-23T23:03:43.183765764Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:43.186403 containerd[1555]: time="2025-11-23T23:03:43.186326754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 23:03:43.186558 containerd[1555]: time="2025-11-23T23:03:43.186455834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:43.188330 kubelet[2752]: E1123 23:03:43.186829 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:43.189182 kubelet[2752]: E1123 23:03:43.188878 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 23:03:43.189182 kubelet[2752]: E1123 23:03:43.189102 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bpn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-msv7x_calico-system(0f266d4c-4f00-43ea-b251-4bdc9532cfcf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:43.190427 kubelet[2752]: E1123 23:03:43.190388 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:03:44.869973 containerd[1555]: time="2025-11-23T23:03:44.869922126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 23:03:45.193408 containerd[1555]: time="2025-11-23T23:03:45.192706725Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:45.194627 containerd[1555]: time="2025-11-23T23:03:45.194479718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 23:03:45.194627 containerd[1555]: time="2025-11-23T23:03:45.194594038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 23:03:45.195676 kubelet[2752]: E1123 23:03:45.195604 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:03:45.196032 kubelet[2752]: E1123 23:03:45.195681 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 23:03:45.196782 kubelet[2752]: E1123 23:03:45.196707 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cf2rc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56947b74b7-c65fq_calico-system(c25375d2-2332-49bd-a8e3-61dfcb956c34): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:45.197944 kubelet[2752]: E1123 23:03:45.197894 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:03:47.871879 kubelet[2752]: E1123 23:03:47.871273 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:03:49.873887 containerd[1555]: time="2025-11-23T23:03:49.873755504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:50.217867 containerd[1555]: time="2025-11-23T23:03:50.217421221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:50.219053 containerd[1555]: time="2025-11-23T23:03:50.218905935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:50.219479 containerd[1555]: time="2025-11-23T23:03:50.218986695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:50.219832 kubelet[2752]: E1123 23:03:50.219709 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:50.219832 kubelet[2752]: E1123 23:03:50.219798 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:50.220688 kubelet[2752]: E1123 23:03:50.220557 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbmwl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-lkrpd_calico-apiserver(8e7fff62-849b-430a-8c5a-7b0e171a5c60): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:50.221977 kubelet[2752]: E1123 23:03:50.221915 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:03:51.870489 containerd[1555]: time="2025-11-23T23:03:51.870144213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 23:03:52.196438 containerd[1555]: time="2025-11-23T23:03:52.196033057Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 23:03:52.198118 containerd[1555]: time="2025-11-23T23:03:52.198038650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 23:03:52.198312 containerd[1555]: 
time="2025-11-23T23:03:52.198169610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 23:03:52.198551 kubelet[2752]: E1123 23:03:52.198467 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:52.198891 kubelet[2752]: E1123 23:03:52.198574 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 23:03:52.198891 kubelet[2752]: E1123 23:03:52.198802 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8dqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5f7f78c7-d8v97_calico-apiserver(1e722cd7-3fb4-43d9-b64b-32096b2087bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 23:03:52.200438 kubelet[2752]: E1123 23:03:52.200379 2752 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:03:52.876338 kubelet[2752]: E1123 23:03:52.876194 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:03:56.872529 kubelet[2752]: E1123 23:03:56.872177 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:03:56.874814 kubelet[2752]: E1123 23:03:56.874757 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:03:57.871337 kubelet[2752]: E1123 23:03:57.871043 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:03:58.142721 systemd[1]: Started sshd@7-49.12.4.178:22-139.178.68.195:35122.service - OpenSSH per-connection server daemon (139.178.68.195:35122). Nov 23 23:03:58.870520 kubelet[2752]: E1123 23:03:58.870478 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:03:59.140842 sshd[5017]: Accepted publickey for core from 139.178.68.195 port 35122 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:03:59.143133 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:03:59.153729 systemd-logind[1519]: New session 8 of user core. Nov 23 23:03:59.159558 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 23:03:59.908155 sshd[5020]: Connection closed by 139.178.68.195 port 35122 Nov 23 23:03:59.908675 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Nov 23 23:03:59.916111 systemd[1]: sshd@7-49.12.4.178:22-139.178.68.195:35122.service: Deactivated successfully. Nov 23 23:03:59.920729 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 23:03:59.923360 systemd-logind[1519]: Session 8 logged out. Waiting for processes to exit. Nov 23 23:03:59.926727 systemd-logind[1519]: Removed session 8. 
Nov 23 23:04:01.874245 kubelet[2752]: E1123 23:04:01.874133 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:04:03.876555 kubelet[2752]: E1123 23:04:03.876415 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:04:05.084594 systemd[1]: Started sshd@8-49.12.4.178:22-139.178.68.195:53206.service - OpenSSH per-connection server daemon (139.178.68.195:53206). Nov 23 23:04:06.076416 sshd[5039]: Accepted publickey for core from 139.178.68.195 port 53206 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:06.078695 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:06.089066 systemd-logind[1519]: New session 9 of user core. Nov 23 23:04:06.094745 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 23:04:06.871321 kubelet[2752]: E1123 23:04:06.869669 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:04:06.878349 sshd[5042]: Connection closed by 139.178.68.195 port 53206 Nov 23 23:04:06.879316 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:06.887929 systemd-logind[1519]: Session 9 logged out. Waiting for processes to exit. Nov 23 23:04:06.888236 systemd[1]: sshd@8-49.12.4.178:22-139.178.68.195:53206.service: Deactivated successfully. Nov 23 23:04:06.892968 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 23:04:06.898481 systemd-logind[1519]: Removed session 9. 
Nov 23 23:04:08.876627 kubelet[2752]: E1123 23:04:08.876569 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:04:09.872323 kubelet[2752]: E1123 23:04:09.870031 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:04:11.871021 kubelet[2752]: E1123 23:04:11.870890 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:04:12.045824 systemd[1]: Started sshd@9-49.12.4.178:22-139.178.68.195:54534.service - OpenSSH per-connection server daemon (139.178.68.195:54534). Nov 23 23:04:12.869162 kubelet[2752]: E1123 23:04:12.868718 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:04:13.037394 sshd[5080]: Accepted publickey for core from 139.178.68.195 port 54534 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:13.040423 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:13.050467 systemd-logind[1519]: New session 10 of user core. 
Nov 23 23:04:13.053515 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 23:04:13.828158 sshd[5084]: Connection closed by 139.178.68.195 port 54534 Nov 23 23:04:13.828801 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:13.834003 systemd-logind[1519]: Session 10 logged out. Waiting for processes to exit. Nov 23 23:04:13.837531 systemd[1]: sshd@9-49.12.4.178:22-139.178.68.195:54534.service: Deactivated successfully. Nov 23 23:04:13.840209 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 23:04:13.843729 systemd-logind[1519]: Removed session 10. Nov 23 23:04:13.874417 kubelet[2752]: E1123 23:04:13.874306 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:04:13.997670 systemd[1]: Started sshd@10-49.12.4.178:22-139.178.68.195:54540.service - OpenSSH per-connection server daemon (139.178.68.195:54540). Nov 23 23:04:14.992968 sshd[5100]: Accepted publickey for core from 139.178.68.195 port 54540 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:14.996420 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:15.006369 systemd-logind[1519]: New session 11 of user core. Nov 23 23:04:15.010412 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 23:04:15.805510 sshd[5103]: Connection closed by 139.178.68.195 port 54540 Nov 23 23:04:15.806614 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:15.812631 systemd[1]: sshd@10-49.12.4.178:22-139.178.68.195:54540.service: Deactivated successfully. Nov 23 23:04:15.812815 systemd-logind[1519]: Session 11 logged out. Waiting for processes to exit. Nov 23 23:04:15.816217 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 23:04:15.818268 systemd-logind[1519]: Removed session 11. 
Nov 23 23:04:15.881402 kubelet[2752]: E1123 23:04:15.880871 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:04:15.972617 systemd[1]: Started sshd@11-49.12.4.178:22-139.178.68.195:54542.service - OpenSSH per-connection server daemon (139.178.68.195:54542). Nov 23 23:04:16.957640 sshd[5113]: Accepted publickey for core from 139.178.68.195 port 54542 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:16.959486 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:16.966363 systemd-logind[1519]: New session 12 of user core. Nov 23 23:04:16.969473 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 23:04:17.720942 sshd[5116]: Connection closed by 139.178.68.195 port 54542 Nov 23 23:04:17.721910 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:17.730377 systemd-logind[1519]: Session 12 logged out. Waiting for processes to exit. Nov 23 23:04:17.730784 systemd[1]: sshd@11-49.12.4.178:22-139.178.68.195:54542.service: Deactivated successfully. Nov 23 23:04:17.736258 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 23:04:17.742134 systemd-logind[1519]: Removed session 12. 
Nov 23 23:04:19.875724 kubelet[2752]: E1123 23:04:19.874963 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd" Nov 23 23:04:22.869335 kubelet[2752]: E1123 23:04:22.869263 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34" Nov 23 23:04:22.896252 systemd[1]: Started sshd@12-49.12.4.178:22-139.178.68.195:55320.service - OpenSSH per-connection server daemon (139.178.68.195:55320). Nov 23 23:04:23.873785 kubelet[2752]: E1123 23:04:23.873729 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792" Nov 23 23:04:23.902167 sshd[5128]: Accepted publickey for core from 139.178.68.195 port 55320 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:23.907350 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:23.914880 systemd-logind[1519]: New session 13 of user core. Nov 23 23:04:23.920542 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 23:04:24.695310 sshd[5133]: Connection closed by 139.178.68.195 port 55320 Nov 23 23:04:24.696650 sshd-session[5128]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:24.704516 systemd-logind[1519]: Session 13 logged out. Waiting for processes to exit. Nov 23 23:04:24.705105 systemd[1]: sshd@12-49.12.4.178:22-139.178.68.195:55320.service: Deactivated successfully. Nov 23 23:04:24.708352 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 23:04:24.713119 systemd-logind[1519]: Removed session 13. 
Nov 23 23:04:24.867062 systemd[1]: Started sshd@13-49.12.4.178:22-139.178.68.195:55332.service - OpenSSH per-connection server daemon (139.178.68.195:55332). Nov 23 23:04:24.871523 kubelet[2752]: E1123 23:04:24.871441 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60" Nov 23 23:04:25.862039 sshd[5145]: Accepted publickey for core from 139.178.68.195 port 55332 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:25.864670 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:25.874545 systemd-logind[1519]: New session 14 of user core. Nov 23 23:04:25.880577 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 23:04:26.791751 sshd[5148]: Connection closed by 139.178.68.195 port 55332 Nov 23 23:04:26.795532 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:26.802784 systemd-logind[1519]: Session 14 logged out. Waiting for processes to exit. Nov 23 23:04:26.803804 systemd[1]: sshd@13-49.12.4.178:22-139.178.68.195:55332.service: Deactivated successfully. Nov 23 23:04:26.807416 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 23:04:26.810789 systemd-logind[1519]: Removed session 14. Nov 23 23:04:26.869691 kubelet[2752]: E1123 23:04:26.869256 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca" Nov 23 23:04:26.870506 kubelet[2752]: E1123 23:04:26.870451 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf" Nov 23 23:04:26.961655 systemd[1]: Started sshd@14-49.12.4.178:22-139.178.68.195:55336.service - OpenSSH per-connection server daemon (139.178.68.195:55336). Nov 23 23:04:27.958284 sshd[5159]: Accepted publickey for core from 139.178.68.195 port 55336 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:27.962677 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:27.971200 systemd-logind[1519]: New session 15 of user core. 
Nov 23 23:04:27.977573 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 23:04:28.875763 kubelet[2752]: E1123 23:04:28.875679 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1" Nov 23 23:04:29.462539 sshd[5162]: Connection closed by 139.178.68.195 port 55336 Nov 23 23:04:29.464000 sshd-session[5159]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:29.471455 systemd[1]: sshd@14-49.12.4.178:22-139.178.68.195:55336.service: Deactivated successfully. Nov 23 23:04:29.472104 systemd-logind[1519]: Session 15 logged out. Waiting for processes to exit. Nov 23 23:04:29.477076 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 23:04:29.479848 systemd-logind[1519]: Removed session 15. Nov 23 23:04:29.629515 systemd[1]: Started sshd@15-49.12.4.178:22-139.178.68.195:55352.service - OpenSSH per-connection server daemon (139.178.68.195:55352). Nov 23 23:04:30.635351 sshd[5185]: Accepted publickey for core from 139.178.68.195 port 55352 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc Nov 23 23:04:30.635570 sshd-session[5185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 23:04:30.644717 systemd-logind[1519]: New session 16 of user core. Nov 23 23:04:30.650670 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 23:04:31.644814 sshd[5188]: Connection closed by 139.178.68.195 port 55352 Nov 23 23:04:31.645689 sshd-session[5185]: pam_unix(sshd:session): session closed for user core Nov 23 23:04:31.653848 systemd-logind[1519]: Session 16 logged out. Waiting for processes to exit. Nov 23 23:04:31.655665 systemd[1]: sshd@15-49.12.4.178:22-139.178.68.195:55352.service: Deactivated successfully. Nov 23 23:04:31.662579 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 23:04:31.670013 systemd-logind[1519]: Removed session 16. Nov 23 23:04:31.817960 systemd[1]: Started sshd@16-49.12.4.178:22-139.178.68.195:39274.service - OpenSSH per-connection server daemon (139.178.68.195:39274). 
Nov 23 23:04:31.875801 kubelet[2752]: E1123 23:04:31.875736 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd"
Nov 23 23:04:32.823225 sshd[5199]: Accepted publickey for core from 139.178.68.195 port 39274 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:04:32.824355 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:04:32.833565 systemd-logind[1519]: New session 17 of user core.
Nov 23 23:04:32.839551 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 23 23:04:33.587429 sshd[5202]: Connection closed by 139.178.68.195 port 39274
Nov 23 23:04:33.589691 sshd-session[5199]: pam_unix(sshd:session): session closed for user core
Nov 23 23:04:33.595954 systemd[1]: sshd@16-49.12.4.178:22-139.178.68.195:39274.service: Deactivated successfully.
Nov 23 23:04:33.598517 systemd[1]: session-17.scope: Deactivated successfully.
Nov 23 23:04:33.602933 systemd-logind[1519]: Session 17 logged out. Waiting for processes to exit.
Nov 23 23:04:33.605803 systemd-logind[1519]: Removed session 17.
Nov 23 23:04:35.874321 kubelet[2752]: E1123 23:04:35.872013 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34"
Nov 23 23:04:35.875072 kubelet[2752]: E1123 23:04:35.875032 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60"
Nov 23 23:04:37.873994 kubelet[2752]: E1123 23:04:37.873904 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca"
Nov 23 23:04:37.875249 kubelet[2752]: E1123 23:04:37.875086 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792"
Nov 23 23:04:38.755592 systemd[1]: Started sshd@17-49.12.4.178:22-139.178.68.195:39282.service - OpenSSH per-connection server daemon (139.178.68.195:39282).
Nov 23 23:04:38.869710 kubelet[2752]: E1123 23:04:38.869347 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf"
Nov 23 23:04:39.739533 sshd[5240]: Accepted publickey for core from 139.178.68.195 port 39282 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:04:39.741964 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:04:39.750345 systemd-logind[1519]: New session 18 of user core.
Nov 23 23:04:39.755526 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 23 23:04:40.494941 sshd[5243]: Connection closed by 139.178.68.195 port 39282
Nov 23 23:04:40.494815 sshd-session[5240]: pam_unix(sshd:session): session closed for user core
Nov 23 23:04:40.502282 systemd-logind[1519]: Session 18 logged out. Waiting for processes to exit.
Nov 23 23:04:40.502437 systemd[1]: sshd@17-49.12.4.178:22-139.178.68.195:39282.service: Deactivated successfully.
Nov 23 23:04:40.506021 systemd[1]: session-18.scope: Deactivated successfully.
Nov 23 23:04:40.510655 systemd-logind[1519]: Removed session 18.
Nov 23 23:04:43.873248 kubelet[2752]: E1123 23:04:43.873116 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1"
Nov 23 23:04:44.871415 kubelet[2752]: E1123 23:04:44.871342 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd"
Nov 23 23:04:45.696662 systemd[1]: Started sshd@18-49.12.4.178:22-139.178.68.195:47362.service - OpenSSH per-connection server daemon (139.178.68.195:47362).
Nov 23 23:04:46.770764 sshd[5257]: Accepted publickey for core from 139.178.68.195 port 47362 ssh2: RSA SHA256:ciiN5dbxR9M6TNI3S4kKQ29WbGUUBM+/gmA9qElCjbc
Nov 23 23:04:46.773586 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 23:04:46.780555 systemd-logind[1519]: New session 19 of user core.
Nov 23 23:04:46.785806 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 23 23:04:46.869268 kubelet[2752]: E1123 23:04:46.869023 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60"
Nov 23 23:04:47.594330 sshd[5260]: Connection closed by 139.178.68.195 port 47362
Nov 23 23:04:47.593374 sshd-session[5257]: pam_unix(sshd:session): session closed for user core
Nov 23 23:04:47.601761 systemd[1]: sshd@18-49.12.4.178:22-139.178.68.195:47362.service: Deactivated successfully.
Nov 23 23:04:47.607553 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 23:04:47.608809 systemd-logind[1519]: Session 19 logged out. Waiting for processes to exit.
Nov 23 23:04:47.612717 systemd-logind[1519]: Removed session 19.
Nov 23 23:04:49.870383 kubelet[2752]: E1123 23:04:49.869836 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca"
Nov 23 23:04:50.870514 kubelet[2752]: E1123 23:04:50.870466 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34"
Nov 23 23:04:51.871683 kubelet[2752]: E1123 23:04:51.871555 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf"
Nov 23 23:04:51.873665 kubelet[2752]: E1123 23:04:51.873352 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792"
Nov 23 23:04:56.872798 kubelet[2752]: E1123 23:04:56.872173 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5f7f78c7-d8v97" podUID="1e722cd7-3fb4-43d9-b64b-32096b2087bd"
Nov 23 23:04:56.878763 kubelet[2752]: E1123 23:04:56.878712 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b9685c896-hmv6x" podUID="c3a04e07-75ec-47a7-ac40-5bddb6afbad1"
Nov 23 23:04:58.868995 kubelet[2752]: E1123 23:04:58.868898 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-lkrpd" podUID="8e7fff62-849b-430a-8c5a-7b0e171a5c60"
Nov 23 23:05:02.219470 systemd[1]: cri-containerd-6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892.scope: Deactivated successfully.
Nov 23 23:05:02.219926 systemd[1]: cri-containerd-6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892.scope: Consumed 4.092s CPU time, 62M memory peak, 3.3M read from disk.
Nov 23 23:05:02.223128 containerd[1555]: time="2025-11-23T23:05:02.223082209Z" level=info msg="received container exit event container_id:\"6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892\" id:\"6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892\" pid:2576 exit_status:1 exited_at:{seconds:1763939102 nanos:222615124}"
Nov 23 23:05:02.252514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892-rootfs.mount: Deactivated successfully.
Nov 23 23:05:02.302215 kubelet[2752]: E1123 23:05:02.296616 2752 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56584->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-msv7x.187ac51de0bb80e3 calico-system 1748 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-msv7x,UID:0f266d4c-4f00-43ea-b251-4bdc9532cfcf,APIVersion:v1,ResourceVersion:804,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-2-1-d-6a40a07c08,},FirstTimestamp:2025-11-23 23:02:16 +0000 UTC,LastTimestamp:2025-11-23 23:04:51.871506523 +0000 UTC m=+208.130087878,Count:10,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-1-d-6a40a07c08,}"
Nov 23 23:05:02.681063 kubelet[2752]: E1123 23:05:02.680982 2752 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56764->10.0.0.2:2379: read: connection timed out"
Nov 23 23:05:02.738078 kubelet[2752]: I1123 23:05:02.737994 2752 scope.go:117] "RemoveContainer" containerID="6ba48f3e693e1cefa73e642b79b16227f5cdf901472f666657ddf125f613c892"
Nov 23 23:05:02.753333 containerd[1555]: time="2025-11-23T23:05:02.753266304Z" level=info msg="CreateContainer within sandbox \"904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 23 23:05:02.765491 containerd[1555]: time="2025-11-23T23:05:02.765427018Z" level=info msg="Container 78498cc868945fe53de17375597bcfa34c3128ca1af72d83ae8b20ffd2c9f896: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:05:02.772728 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733606681.mount: Deactivated successfully.
Nov 23 23:05:02.780047 containerd[1555]: time="2025-11-23T23:05:02.779925155Z" level=info msg="CreateContainer within sandbox \"904cb53be75b41774c429eb5f8c284ae1f1861c2cd9689bdd0f4fdad7b21f398\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"78498cc868945fe53de17375597bcfa34c3128ca1af72d83ae8b20ffd2c9f896\""
Nov 23 23:05:02.785066 containerd[1555]: time="2025-11-23T23:05:02.784946162Z" level=info msg="StartContainer for \"78498cc868945fe53de17375597bcfa34c3128ca1af72d83ae8b20ffd2c9f896\""
Nov 23 23:05:02.787190 containerd[1555]: time="2025-11-23T23:05:02.787127182Z" level=info msg="connecting to shim 78498cc868945fe53de17375597bcfa34c3128ca1af72d83ae8b20ffd2c9f896" address="unix:///run/containerd/s/0234dcd8f7c6f8635f2ae80b9ff06e6daf7dc3e22f1cd4ea25ba0526fb2600b9" protocol=ttrpc version=3
Nov 23 23:05:02.817634 systemd[1]: Started cri-containerd-78498cc868945fe53de17375597bcfa34c3128ca1af72d83ae8b20ffd2c9f896.scope - libcontainer container 78498cc868945fe53de17375597bcfa34c3128ca1af72d83ae8b20ffd2c9f896.
Nov 23 23:05:02.869813 containerd[1555]: time="2025-11-23T23:05:02.869754198Z" level=info msg="StartContainer for \"78498cc868945fe53de17375597bcfa34c3128ca1af72d83ae8b20ffd2c9f896\" returns successfully"
Nov 23 23:05:02.871679 kubelet[2752]: E1123 23:05:02.871598 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-msv7x" podUID="0f266d4c-4f00-43ea-b251-4bdc9532cfcf"
Nov 23 23:05:03.223577 systemd[1]: cri-containerd-d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b.scope: Deactivated successfully.
Nov 23 23:05:03.224196 systemd[1]: cri-containerd-d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b.scope: Consumed 39.555s CPU time, 109.3M memory peak.
Nov 23 23:05:03.225553 containerd[1555]: time="2025-11-23T23:05:03.225446621Z" level=info msg="received container exit event container_id:\"d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b\" id:\"d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b\" pid:3072 exit_status:1 exited_at:{seconds:1763939103 nanos:223985568}"
Nov 23 23:05:03.258044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b-rootfs.mount: Deactivated successfully.
Nov 23 23:05:03.747610 kubelet[2752]: I1123 23:05:03.747403 2752 scope.go:117] "RemoveContainer" containerID="d8d413300e4a1982a0749a51e6ff3ac7392d74edd0d2d8beea93b50924c5957b"
Nov 23 23:05:03.749859 containerd[1555]: time="2025-11-23T23:05:03.749821463Z" level=info msg="CreateContainer within sandbox \"deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 23 23:05:03.763198 containerd[1555]: time="2025-11-23T23:05:03.762532580Z" level=info msg="Container 525c7c1130bbdb969878762ffff854418e2f712a899bc1305319934854605e3f: CDI devices from CRI Config.CDIDevices: []"
Nov 23 23:05:03.771389 containerd[1555]: time="2025-11-23T23:05:03.771279421Z" level=info msg="CreateContainer within sandbox \"deeab8a5b7c672d633f8ed21010e5ffc5a19361e0271255097ff0f6e3f31c186\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"525c7c1130bbdb969878762ffff854418e2f712a899bc1305319934854605e3f\""
Nov 23 23:05:03.772082 containerd[1555]: time="2025-11-23T23:05:03.772053268Z" level=info msg="StartContainer for \"525c7c1130bbdb969878762ffff854418e2f712a899bc1305319934854605e3f\""
Nov 23 23:05:03.773791 containerd[1555]: time="2025-11-23T23:05:03.773749884Z" level=info msg="connecting to shim 525c7c1130bbdb969878762ffff854418e2f712a899bc1305319934854605e3f" address="unix:///run/containerd/s/82a637c727396d22a6d98a804ef97588cfef2e6615ca1ff32c3d50b525eb9fa6" protocol=ttrpc version=3
Nov 23 23:05:03.800473 systemd[1]: Started cri-containerd-525c7c1130bbdb969878762ffff854418e2f712a899bc1305319934854605e3f.scope - libcontainer container 525c7c1130bbdb969878762ffff854418e2f712a899bc1305319934854605e3f.
Nov 23 23:05:03.847525 containerd[1555]: time="2025-11-23T23:05:03.847463564Z" level=info msg="StartContainer for \"525c7c1130bbdb969878762ffff854418e2f712a899bc1305319934854605e3f\" returns successfully"
Nov 23 23:05:03.870880 containerd[1555]: time="2025-11-23T23:05:03.870814740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 23:05:03.996279 systemd[1]: Started sshd@19-49.12.4.178:22-80.94.92.40:45112.service - OpenSSH per-connection server daemon (80.94.92.40:45112).
Nov 23 23:05:04.059059 sshd[5369]: Connection closed by 80.94.92.40 port 45112
Nov 23 23:05:04.060644 systemd[1]: sshd@19-49.12.4.178:22-80.94.92.40:45112.service: Deactivated successfully.
Nov 23 23:05:04.220550 containerd[1555]: time="2025-11-23T23:05:04.220260853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:05:04.222012 containerd[1555]: time="2025-11-23T23:05:04.221963108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 23:05:04.222343 containerd[1555]: time="2025-11-23T23:05:04.222164910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 23:05:04.222669 kubelet[2752]: E1123 23:05:04.222605 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:05:04.222823 kubelet[2752]: E1123 23:05:04.222745 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 23:05:04.223086 kubelet[2752]: E1123 23:05:04.222974 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rsnf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b9f97f6d6-6nxhq_calico-apiserver(3b014a55-de73-4ac9-9e35-2cc72ed4bcca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:05:04.225095 kubelet[2752]: E1123 23:05:04.225004 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b9f97f6d6-6nxhq" podUID="3b014a55-de73-4ac9-9e35-2cc72ed4bcca"
Nov 23 23:05:04.791988 kubelet[2752]: I1123 23:05:04.791935 2752 status_manager.go:890] "Failed to get status for pod" podUID="445131c16ed70449727193d47e83fee7" pod="kube-system/kube-controller-manager-ci-4459-2-1-d-6a40a07c08" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56676->10.0.0.2:2379: read: connection timed out"
Nov 23 23:05:04.870769 kubelet[2752]: E1123 23:05:04.870648 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56947b74b7-c65fq" podUID="c25375d2-2332-49bd-a8e3-61dfcb956c34"
Nov 23 23:05:05.869636 containerd[1555]: time="2025-11-23T23:05:05.869568064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 23 23:05:06.218006 containerd[1555]: time="2025-11-23T23:05:06.217639383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:05:06.219495 containerd[1555]: time="2025-11-23T23:05:06.219431038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 23 23:05:06.219644 containerd[1555]: time="2025-11-23T23:05:06.219532799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 23 23:05:06.219774 kubelet[2752]: E1123 23:05:06.219679 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:05:06.219774 kubelet[2752]: E1123 23:05:06.219730 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 23:05:06.220215 kubelet[2752]: E1123 23:05:06.219844 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:05:06.222759 containerd[1555]: time="2025-11-23T23:05:06.222154342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 23 23:05:06.556750 containerd[1555]: time="2025-11-23T23:05:06.556244599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 23:05:06.558162 containerd[1555]: time="2025-11-23T23:05:06.558094215Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 23 23:05:06.558366 containerd[1555]: time="2025-11-23T23:05:06.558192976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 23 23:05:06.558574 kubelet[2752]: E1123 23:05:06.558498 2752 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:05:06.558574 kubelet[2752]: E1123 23:05:06.558558 2752 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 23:05:06.558898 kubelet[2752]: E1123 23:05:06.558835 2752 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jw57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qcdmk_calico-system(65c6ee75-f266-4d8e-9f91-7935bbe3f792): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 23 23:05:06.560225 kubelet[2752]: E1123 23:05:06.560153 2752 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qcdmk" podUID="65c6ee75-f266-4d8e-9f91-7935bbe3f792"
Nov 23 23:05:07.920797 systemd[1]: Started sshd@20-49.12.4.178:22-147.185.132.141:59866.service - OpenSSH per-connection server daemon (147.185.132.141:59866).