Nov 23 22:54:15.803430 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 23 22:54:15.803454 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:49:09 -00 2025 Nov 23 22:54:15.803464 kernel: KASLR enabled Nov 23 22:54:15.803470 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Nov 23 22:54:15.803475 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Nov 23 22:54:15.803481 kernel: random: crng init done Nov 23 22:54:15.803487 kernel: secureboot: Secure boot disabled Nov 23 22:54:15.803493 kernel: ACPI: Early table checksum verification disabled Nov 23 22:54:15.803498 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Nov 23 22:54:15.803504 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Nov 23 22:54:15.803512 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803518 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803523 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803530 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803537 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803544 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803550 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803556 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803562 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:15.803568 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Nov 23 22:54:15.803574 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Nov 23 22:54:15.803580 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 22:54:15.803586 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Nov 23 22:54:15.803592 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Nov 23 22:54:15.803598 kernel: Zone ranges: Nov 23 22:54:15.803604 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 23 22:54:15.803612 kernel: DMA32 empty Nov 23 22:54:15.803617 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Nov 23 22:54:15.803637 kernel: Device empty Nov 23 22:54:15.803643 kernel: Movable zone start for each node Nov 23 22:54:15.803649 kernel: Early memory node ranges Nov 23 22:54:15.803655 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Nov 23 22:54:15.803661 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Nov 23 22:54:15.803667 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Nov 23 22:54:15.803673 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Nov 23 22:54:15.803679 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Nov 23 22:54:15.803685 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Nov 23 22:54:15.803691 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Nov 23 22:54:15.803699 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Nov 23 22:54:15.803705 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Nov 23 
22:54:15.803713 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Nov 23 22:54:15.803719 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Nov 23 22:54:15.803726 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Nov 23 22:54:15.803734 kernel: psci: probing for conduit method from ACPI. Nov 23 22:54:15.803740 kernel: psci: PSCIv1.1 detected in firmware. Nov 23 22:54:15.803746 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 22:54:15.803753 kernel: psci: Trusted OS migration not required Nov 23 22:54:15.803759 kernel: psci: SMC Calling Convention v1.1 Nov 23 22:54:15.803765 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 23 22:54:15.803771 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 22:54:15.803778 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 22:54:15.803784 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 23 22:54:15.803791 kernel: Detected PIPT I-cache on CPU0 Nov 23 22:54:15.803797 kernel: CPU features: detected: GIC system register CPU interface Nov 23 22:54:15.803804 kernel: CPU features: detected: Spectre-v4 Nov 23 22:54:15.803811 kernel: CPU features: detected: Spectre-BHB Nov 23 22:54:15.803817 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 23 22:54:15.803823 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 23 22:54:15.803830 kernel: CPU features: detected: ARM erratum 1418040 Nov 23 22:54:15.803836 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 23 22:54:15.803842 kernel: alternatives: applying boot alternatives Nov 23 22:54:15.803850 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:54:15.803857 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 22:54:15.803863 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 22:54:15.803870 kernel: Fallback order for Node 0: 0 Nov 23 22:54:15.803877 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Nov 23 22:54:15.803884 kernel: Policy zone: Normal Nov 23 22:54:15.803890 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 22:54:15.803896 kernel: software IO TLB: area num 2. Nov 23 22:54:15.803903 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Nov 23 22:54:15.803909 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 23 22:54:15.803915 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 22:54:15.803922 kernel: rcu: RCU event tracing is enabled. Nov 23 22:54:15.803929 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 23 22:54:15.803935 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 22:54:15.803942 kernel: Tracing variant of Tasks RCU enabled. Nov 23 22:54:15.803948 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 23 22:54:15.803956 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 23 22:54:15.803962 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
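The kernel command line recorded above carries the parameters the Flatcar initrd later uses to assemble the read-only /usr device (mount.usr, verity.usr, verity.usrhash) alongside the writable root (root=LABEL=ROOT). As a minimal sketch, not part of the boot flow itself, the following Python splits such a command line into key/value pairs and pulls out those fields; the literal string is an abbreviated copy of the line from this log, and on a live system /proc/cmdline holds the same text.

    # Minimal sketch: split a kernel command line into key/value pairs and pull
    # out the Flatcar /usr verity parameters seen in the log above. The literal
    # below is copied from this boot log; /proc/cmdline gives the same string
    # on a running system.
    import shlex

    def parse_cmdline(cmdline: str) -> dict:
        """Map 'key=value' arguments to a dict; bare flags map to an empty string."""
        args = {}
        for token in shlex.split(cmdline):
            key, sep, value = token.partition("=")
            args[key] = value if sep else ""
        return args

    if __name__ == "__main__":
        cmdline = (
            "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
            "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
            "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
            "console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force "
            "flatcar.oem.id=hetzner "
            "verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34"
        )
        args = parse_cmdline(cmdline)
        for key in ("root", "mount.usr", "verity.usr", "verity.usrhash"):
            print(f"{key} = {args[key]}")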
Nov 23 22:54:15.803969 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 23 22:54:15.803976 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 22:54:15.803982 kernel: GICv3: 256 SPIs implemented Nov 23 22:54:15.803988 kernel: GICv3: 0 Extended SPIs implemented Nov 23 22:54:15.803994 kernel: Root IRQ handler: gic_handle_irq Nov 23 22:54:15.804000 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 23 22:54:15.804007 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 22:54:15.804013 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 23 22:54:15.804019 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 23 22:54:15.804028 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Nov 23 22:54:15.804034 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Nov 23 22:54:15.804041 kernel: GICv3: using LPI property table @0x0000000100120000 Nov 23 22:54:15.804047 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Nov 23 22:54:15.804054 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 22:54:15.804060 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 22:54:15.804066 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 23 22:54:15.804073 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 23 22:54:15.804079 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 23 22:54:15.804086 kernel: Console: colour dummy device 80x25 Nov 23 22:54:15.804092 kernel: ACPI: Core revision 20240827 Nov 23 22:54:15.804100 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 23 22:54:15.804107 kernel: pid_max: default: 32768 minimum: 301 Nov 23 22:54:15.804113 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 22:54:15.804120 kernel: landlock: Up and running. Nov 23 22:54:15.804126 kernel: SELinux: Initializing. Nov 23 22:54:15.804133 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:54:15.804140 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:54:15.804146 kernel: rcu: Hierarchical SRCU implementation. Nov 23 22:54:15.804153 kernel: rcu: Max phase no-delay instances is 400. Nov 23 22:54:15.804161 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 22:54:15.804168 kernel: Remapping and enabling EFI services. Nov 23 22:54:15.804174 kernel: smp: Bringing up secondary CPUs ... Nov 23 22:54:15.804181 kernel: Detected PIPT I-cache on CPU1 Nov 23 22:54:15.804187 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 23 22:54:15.804194 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Nov 23 22:54:15.804201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 22:54:15.804207 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 23 22:54:15.804213 kernel: smp: Brought up 1 node, 2 CPUs Nov 23 22:54:15.804222 kernel: SMP: Total of 2 processors activated. 
Nov 23 22:54:15.804233 kernel: CPU: All CPU(s) started at EL1 Nov 23 22:54:15.804240 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 22:54:15.804248 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 23 22:54:15.804255 kernel: CPU features: detected: Common not Private translations Nov 23 22:54:15.804262 kernel: CPU features: detected: CRC32 instructions Nov 23 22:54:15.804269 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 23 22:54:15.804276 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 23 22:54:15.804284 kernel: CPU features: detected: LSE atomic instructions Nov 23 22:54:15.804291 kernel: CPU features: detected: Privileged Access Never Nov 23 22:54:15.804298 kernel: CPU features: detected: RAS Extension Support Nov 23 22:54:15.804305 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 23 22:54:15.804312 kernel: alternatives: applying system-wide alternatives Nov 23 22:54:15.804319 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Nov 23 22:54:15.804326 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved) Nov 23 22:54:15.804333 kernel: devtmpfs: initialized Nov 23 22:54:15.804341 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 22:54:15.804349 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 23 22:54:15.804356 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 23 22:54:15.804371 kernel: 0 pages in range for non-PLT usage Nov 23 22:54:15.804380 kernel: 508400 pages in range for PLT usage Nov 23 22:54:15.804387 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 22:54:15.804394 kernel: SMBIOS 3.0.0 present. Nov 23 22:54:15.804401 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Nov 23 22:54:15.804408 kernel: DMI: Memory slots populated: 1/1 Nov 23 22:54:15.804415 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 22:54:15.804424 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 22:54:15.804431 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 22:54:15.804438 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 22:54:15.804445 kernel: audit: initializing netlink subsys (disabled) Nov 23 22:54:15.804452 kernel: audit: type=2000 audit(0.015:1): state=initialized audit_enabled=0 res=1 Nov 23 22:54:15.804459 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 22:54:15.804466 kernel: cpuidle: using governor menu Nov 23 22:54:15.804473 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 23 22:54:15.804480 kernel: ASID allocator initialised with 32768 entries Nov 23 22:54:15.804488 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 22:54:15.804495 kernel: Serial: AMBA PL011 UART driver Nov 23 22:54:15.804502 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 22:54:15.806432 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 22:54:15.806444 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 22:54:15.806452 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 22:54:15.806459 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 22:54:15.806466 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 22:54:15.806474 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 22:54:15.806487 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 22:54:15.806494 kernel: ACPI: Added _OSI(Module Device) Nov 23 22:54:15.806501 kernel: ACPI: Added _OSI(Processor Device) Nov 23 22:54:15.806508 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 22:54:15.806515 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 22:54:15.806522 kernel: ACPI: Interpreter enabled Nov 23 22:54:15.806529 kernel: ACPI: Using GIC for interrupt routing Nov 23 22:54:15.806536 kernel: ACPI: MCFG table detected, 1 entries Nov 23 22:54:15.806543 kernel: ACPI: CPU0 has been hot-added Nov 23 22:54:15.806552 kernel: ACPI: CPU1 has been hot-added Nov 23 22:54:15.806559 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 23 22:54:15.806566 kernel: printk: legacy console [ttyAMA0] enabled Nov 23 22:54:15.806573 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 23 22:54:15.806751 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 22:54:15.806818 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 22:54:15.806877 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 22:54:15.806933 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 23 22:54:15.806993 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 23 22:54:15.807003 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 23 22:54:15.807010 kernel: PCI host bridge to bus 0000:00 Nov 23 22:54:15.808750 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 23 22:54:15.808821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 22:54:15.808875 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 23 22:54:15.808928 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 23 22:54:15.809019 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 23 22:54:15.809095 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Nov 23 22:54:15.809157 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Nov 23 22:54:15.809216 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Nov 23 22:54:15.809284 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.809345 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Nov 23 22:54:15.809432 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 23 
22:54:15.809495 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Nov 23 22:54:15.809554 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Nov 23 22:54:15.810701 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.810811 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Nov 23 22:54:15.810875 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 23 22:54:15.810937 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Nov 23 22:54:15.811016 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.811076 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Nov 23 22:54:15.811136 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 23 22:54:15.811194 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Nov 23 22:54:15.811252 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Nov 23 22:54:15.811324 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.811406 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Nov 23 22:54:15.811472 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 23 22:54:15.811531 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Nov 23 22:54:15.811590 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Nov 23 22:54:15.814771 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.814864 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Nov 23 22:54:15.814926 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 23 22:54:15.814985 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Nov 23 22:54:15.815052 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Nov 23 22:54:15.815120 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.815181 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Nov 23 22:54:15.815240 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 23 22:54:15.815300 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Nov 23 22:54:15.815359 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Nov 23 22:54:15.815480 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.815571 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Nov 23 22:54:15.815691 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 23 22:54:15.815761 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Nov 23 22:54:15.815822 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Nov 23 22:54:15.815894 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.815954 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Nov 23 22:54:15.816018 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 23 22:54:15.816077 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Nov 23 22:54:15.816145 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:15.816206 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Nov 23 22:54:15.816282 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 23 22:54:15.816345 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Nov 23 22:54:15.816428 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Nov 23 22:54:15.816494 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Nov 23 22:54:15.816566 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 23 22:54:15.816646 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Nov 23 22:54:15.816709 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 23 22:54:15.816770 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Nov 23 22:54:15.816838 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Nov 23 22:54:15.816898 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Nov 23 22:54:15.816974 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Nov 23 22:54:15.817035 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Nov 23 22:54:15.817096 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Nov 23 22:54:15.817166 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Nov 23 22:54:15.817228 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Nov 23 22:54:15.817300 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Nov 23 22:54:15.817377 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Nov 23 22:54:15.817446 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Nov 23 22:54:15.817518 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Nov 23 22:54:15.817579 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Nov 23 22:54:15.819785 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Nov 23 22:54:15.819900 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 23 22:54:15.819964 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Nov 23 22:54:15.820035 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Nov 23 22:54:15.820096 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Nov 23 22:54:15.820162 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Nov 23 22:54:15.820223 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Nov 23 22:54:15.820284 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Nov 23 22:54:15.820347 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Nov 23 22:54:15.820458 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Nov 23 22:54:15.820526 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Nov 23 22:54:15.820591 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 23 22:54:15.820671 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Nov 23 22:54:15.820736 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Nov 23 22:54:15.820800 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 23 22:54:15.820860 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Nov 23 22:54:15.820918 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Nov 23 22:54:15.820986 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 23 22:54:15.821045 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Nov 23 22:54:15.821102 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Nov 23 22:54:15.821165 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 23 22:54:15.821224 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Nov 23 22:54:15.821282 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Nov 23 22:54:15.821347 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 23 22:54:15.821421 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Nov 23 22:54:15.821482 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Nov 23 22:54:15.821545 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 23 22:54:15.821604 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Nov 23 22:54:15.826814 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Nov 23 22:54:15.826913 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 23 22:54:15.826981 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Nov 23 22:54:15.827042 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Nov 23 22:54:15.827107 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Nov 23 22:54:15.827167 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Nov 23 22:54:15.827231 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Nov 23 22:54:15.827291 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Nov 23 22:54:15.827354 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Nov 23 22:54:15.827472 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Nov 23 22:54:15.827539 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Nov 23 22:54:15.827598 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Nov 23 22:54:15.827689 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Nov 23 22:54:15.827752 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Nov 23 22:54:15.827814 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Nov 23 22:54:15.827873 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Nov 23 22:54:15.827939 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Nov 23 22:54:15.828003 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Nov 23 
22:54:15.828062 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Nov 23 22:54:15.828120 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Nov 23 22:54:15.828179 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Nov 23 22:54:15.828237 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Nov 23 22:54:15.828300 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Nov 23 22:54:15.828359 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Nov 23 22:54:15.828435 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Nov 23 22:54:15.828498 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Nov 23 22:54:15.828558 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Nov 23 22:54:15.828616 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Nov 23 22:54:15.830107 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Nov 23 22:54:15.830183 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Nov 23 22:54:15.830247 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Nov 23 22:54:15.830307 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Nov 23 22:54:15.830386 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Nov 23 22:54:15.830452 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Nov 23 22:54:15.830515 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Nov 23 22:54:15.830575 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Nov 23 22:54:15.830662 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Nov 23 22:54:15.830733 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Nov 23 22:54:15.830796 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Nov 23 22:54:15.830855 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Nov 23 22:54:15.830920 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Nov 23 22:54:15.830980 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Nov 23 22:54:15.831046 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Nov 23 22:54:15.831115 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Nov 23 22:54:15.831186 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 23 22:54:15.831251 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Nov 23 22:54:15.831312 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 23 22:54:15.831384 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 23 22:54:15.831448 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Nov 23 22:54:15.831506 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Nov 23 22:54:15.831570 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Nov 23 22:54:15.834062 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 23 22:54:15.834177 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 23 22:54:15.834239 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Nov 23 22:54:15.834300 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Nov 23 22:54:15.834419 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Nov 23 22:54:15.834493 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Nov 23 22:54:15.834556 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 23 22:54:15.834616 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 23 22:54:15.834718 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Nov 23 22:54:15.834779 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Nov 23 22:54:15.834850 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Nov 23 22:54:15.834913 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 23 22:54:15.834973 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 23 22:54:15.835032 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Nov 23 22:54:15.835090 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Nov 23 22:54:15.835160 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Nov 23 22:54:15.835221 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Nov 23 22:54:15.835282 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 23 22:54:15.835341 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 23 22:54:15.835416 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Nov 23 22:54:15.835477 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Nov 23 22:54:15.835543 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Nov 23 22:54:15.835606 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Nov 23 22:54:15.835767 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 23 22:54:15.835854 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 23 22:54:15.835914 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Nov 23 22:54:15.835973 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Nov 23 22:54:15.836040 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Nov 23 22:54:15.836102 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Nov 23 22:54:15.836165 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Nov 23 22:54:15.836227 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 23 22:54:15.836289 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 23 22:54:15.836347 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Nov 23 22:54:15.836430 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Nov 23 22:54:15.836496 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 23 22:54:15.836555 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 23 22:54:15.836613 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Nov 23 22:54:15.836689 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Nov 23 22:54:15.836753 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 23 22:54:15.836812 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Nov 23 22:54:15.836869 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Nov 23 22:54:15.836930 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 22:54:15.836993 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 23 22:54:15.837046 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Nov 23 22:54:15.837098 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 23 22:54:15.837167 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 23 22:54:15.837223 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Nov 23 22:54:15.837280 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Nov 23 22:54:15.837343 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Nov 23 22:54:15.837443 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Nov 23 22:54:15.837503 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Nov 23 22:54:15.837569 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Nov 23 22:54:15.837645 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Nov 23 22:54:15.837711 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Nov 23 22:54:15.837777 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 23 22:54:15.837832 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Nov 23 22:54:15.837885 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Nov 23 22:54:15.837947 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Nov 23 22:54:15.838001 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Nov 23 22:54:15.838054 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Nov 23 22:54:15.838120 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Nov 23 22:54:15.838179 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Nov 23 22:54:15.838232 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Nov 23 22:54:15.838294 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Nov 23 22:54:15.838348 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Nov 23 22:54:15.838418 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Nov 23 22:54:15.838486 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Nov 23 22:54:15.838544 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Nov 23 22:54:15.838598 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Nov 23 22:54:15.839081 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Nov 23 22:54:15.839171 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Nov 23 22:54:15.839227 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 22:54:15.839243 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 22:54:15.839250 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 22:54:15.839260 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 22:54:15.839268 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 22:54:15.839275 kernel: iommu: Default domain type: Translated Nov 23 22:54:15.839283 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 22:54:15.839290 kernel: efivars: Registered efivars operations Nov 23 22:54:15.839297 kernel: vgaarb: loaded Nov 23 22:54:15.839305 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 22:54:15.839312 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 22:54:15.839320 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 22:54:15.839328 kernel: pnp: PnP ACPI init Nov 23 22:54:15.839454 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 23 
22:54:15.839469 kernel: pnp: PnP ACPI: found 1 devices Nov 23 22:54:15.839476 kernel: NET: Registered PF_INET protocol family Nov 23 22:54:15.839484 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 22:54:15.839492 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 22:54:15.839500 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 22:54:15.839507 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 22:54:15.839517 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 22:54:15.839525 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 22:54:15.839533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:54:15.839540 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:54:15.839547 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 22:54:15.841676 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Nov 23 22:54:15.841710 kernel: PCI: CLS 0 bytes, default 64 Nov 23 22:54:15.841718 kernel: kvm [1]: HYP mode not available Nov 23 22:54:15.841726 kernel: Initialise system trusted keyrings Nov 23 22:54:15.841740 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 22:54:15.841748 kernel: Key type asymmetric registered Nov 23 22:54:15.841755 kernel: Asymmetric key parser 'x509' registered Nov 23 22:54:15.841763 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 22:54:15.841771 kernel: io scheduler mq-deadline registered Nov 23 22:54:15.841778 kernel: io scheduler kyber registered Nov 23 22:54:15.841786 kernel: io scheduler bfq registered Nov 23 22:54:15.841794 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 23 22:54:15.841928 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Nov 23 22:54:15.841996 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Nov 23 22:54:15.842057 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.842121 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Nov 23 22:54:15.842181 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Nov 23 22:54:15.842240 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.842304 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Nov 23 22:54:15.842403 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Nov 23 22:54:15.842480 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.842552 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Nov 23 22:54:15.842612 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Nov 23 22:54:15.842698 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.842770 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Nov 23 22:54:15.842830 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Nov 23 22:54:15.842889 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.842952 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Nov 23 22:54:15.843011 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Nov 23 22:54:15.843074 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.843136 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Nov 23 22:54:15.843198 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Nov 23 22:54:15.843257 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.843319 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Nov 23 22:54:15.843392 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Nov 23 22:54:15.843453 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.843466 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Nov 23 22:54:15.843529 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Nov 23 22:54:15.843588 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Nov 23 22:54:15.844124 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:15.844143 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 22:54:15.844151 kernel: ACPI: button: Power Button [PWRB] Nov 23 22:54:15.844159 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 22:54:15.844240 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Nov 23 22:54:15.844308 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Nov 23 22:54:15.844320 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 22:54:15.844328 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 23 22:54:15.844410 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Nov 23 22:54:15.844423 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Nov 23 22:54:15.844430 kernel: thunder_xcv, ver 1.0 Nov 23 22:54:15.844437 kernel: thunder_bgx, ver 1.0 Nov 23 22:54:15.844445 kernel: nicpf, ver 1.0 Nov 23 22:54:15.844452 kernel: nicvf, ver 1.0 Nov 23 22:54:15.844533 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 22:54:15.844593 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T22:54:15 UTC (1763938455) Nov 23 22:54:15.844603 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 22:54:15.844611 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 23 22:54:15.844618 kernel: watchdog: NMI not fully supported Nov 23 22:54:15.846678 kernel: watchdog: Hard watchdog permanently disabled Nov 23 22:54:15.846691 kernel: NET: Registered PF_INET6 protocol family Nov 23 22:54:15.846699 kernel: Segment Routing with IPv6 Nov 23 22:54:15.846708 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 22:54:15.846722 kernel: NET: Registered PF_PACKET protocol family Nov 23 22:54:15.846729 kernel: Key type dns_resolver registered Nov 23 22:54:15.846737 kernel: registered taskstats version 1 Nov 23 22:54:15.846744 kernel: Loading compiled-in X.509 certificates Nov 23 22:54:15.846752 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 98b0841f2908e51633cd38699ad12796cadb7bd1' Nov 23 22:54:15.846759 kernel: Demotion targets for Node 0: null Nov 23 22:54:15.846767 kernel: Key type .fscrypt 
registered Nov 23 22:54:15.846774 kernel: Key type fscrypt-provisioning registered Nov 23 22:54:15.846781 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 22:54:15.846790 kernel: ima: Allocated hash algorithm: sha1 Nov 23 22:54:15.846797 kernel: ima: No architecture policies found Nov 23 22:54:15.846805 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 22:54:15.846812 kernel: clk: Disabling unused clocks Nov 23 22:54:15.846819 kernel: PM: genpd: Disabling unused power domains Nov 23 22:54:15.846827 kernel: Warning: unable to open an initial console. Nov 23 22:54:15.846834 kernel: Freeing unused kernel memory: 39552K Nov 23 22:54:15.846841 kernel: Run /init as init process Nov 23 22:54:15.846849 kernel: with arguments: Nov 23 22:54:15.846858 kernel: /init Nov 23 22:54:15.846865 kernel: with environment: Nov 23 22:54:15.846872 kernel: HOME=/ Nov 23 22:54:15.846879 kernel: TERM=linux Nov 23 22:54:15.846888 systemd[1]: Successfully made /usr/ read-only. Nov 23 22:54:15.846900 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:54:15.846908 systemd[1]: Detected virtualization kvm. Nov 23 22:54:15.846917 systemd[1]: Detected architecture arm64. Nov 23 22:54:15.846925 systemd[1]: Running in initrd. Nov 23 22:54:15.846932 systemd[1]: No hostname configured, using default hostname. Nov 23 22:54:15.846941 systemd[1]: Hostname set to . Nov 23 22:54:15.846949 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:54:15.846956 systemd[1]: Queued start job for default target initrd.target. Nov 23 22:54:15.846964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:54:15.846972 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:54:15.846982 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 22:54:15.846991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:54:15.846999 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 22:54:15.847008 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 22:54:15.847017 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 22:54:15.847025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 22:54:15.847033 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:54:15.847042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:54:15.847050 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:54:15.847058 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:54:15.847066 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:54:15.847074 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:54:15.847081 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 23 22:54:15.847089 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:54:15.847097 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 22:54:15.847107 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 22:54:15.847116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:54:15.847124 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:54:15.847132 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:54:15.847140 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:54:15.847148 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 22:54:15.847156 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:54:15.847164 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 23 22:54:15.847172 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 22:54:15.847181 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 22:54:15.847189 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:54:15.847197 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:54:15.847205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:15.847257 systemd-journald[245]: Collecting audit messages is disabled. Nov 23 22:54:15.847278 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 22:54:15.847287 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:54:15.847295 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 22:54:15.847303 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 22:54:15.847313 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:15.847321 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 23 22:54:15.847329 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 22:54:15.847337 kernel: Bridge firewalling registered Nov 23 22:54:15.847345 systemd-journald[245]: Journal started Nov 23 22:54:15.847378 systemd-journald[245]: Runtime Journal (/run/log/journal/5d142911efd54f2d910c51a633b9fa51) is 8M, max 76.5M, 68.5M free. Nov 23 22:54:15.822277 systemd-modules-load[246]: Inserted module 'overlay' Nov 23 22:54:15.846637 systemd-modules-load[246]: Inserted module 'br_netfilter' Nov 23 22:54:15.849968 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:54:15.850659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:54:15.854484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:54:15.857908 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:54:15.859563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:54:15.872953 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Nov 23 22:54:15.877775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:54:15.880809 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 22:54:15.884672 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:54:15.886779 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 22:54:15.894452 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:54:15.897675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:54:15.902169 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:54:15.918287 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:54:15.945009 systemd-resolved[288]: Positive Trust Anchors: Nov 23 22:54:15.945031 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:54:15.945062 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:54:15.950665 systemd-resolved[288]: Defaulting to hostname 'linux'. Nov 23 22:54:15.951777 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:54:15.952546 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:54:16.021686 kernel: SCSI subsystem initialized Nov 23 22:54:16.026683 kernel: Loading iSCSI transport class v2.0-870. Nov 23 22:54:16.034941 kernel: iscsi: registered transport (tcp) Nov 23 22:54:16.047703 kernel: iscsi: registered transport (qla4xxx) Nov 23 22:54:16.047809 kernel: QLogic iSCSI HBA Driver Nov 23 22:54:16.071398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:54:16.092546 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:54:16.094399 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:54:16.150709 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 23 22:54:16.153162 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 23 22:54:16.222845 kernel: raid6: neonx8 gen() 15659 MB/s Nov 23 22:54:16.239688 kernel: raid6: neonx4 gen() 15514 MB/s Nov 23 22:54:16.256668 kernel: raid6: neonx2 gen() 13013 MB/s Nov 23 22:54:16.273687 kernel: raid6: neonx1 gen() 10362 MB/s Nov 23 22:54:16.290681 kernel: raid6: int64x8 gen() 6851 MB/s Nov 23 22:54:16.307689 kernel: raid6: int64x4 gen() 7286 MB/s Nov 23 22:54:16.324759 kernel: raid6: int64x2 gen() 6048 MB/s Nov 23 22:54:16.341840 kernel: raid6: int64x1 gen() 4999 MB/s Nov 23 22:54:16.341969 kernel: raid6: using algorithm neonx8 gen() 15659 MB/s Nov 23 22:54:16.358692 kernel: raid6: .... xor() 11603 MB/s, rmw enabled Nov 23 22:54:16.358769 kernel: raid6: using neon recovery algorithm Nov 23 22:54:16.363954 kernel: xor: measuring software checksum speed Nov 23 22:54:16.364039 kernel: 8regs : 21539 MB/sec Nov 23 22:54:16.364059 kernel: 32regs : 21699 MB/sec Nov 23 22:54:16.364077 kernel: arm64_neon : 27240 MB/sec Nov 23 22:54:16.364679 kernel: xor: using function: arm64_neon (27240 MB/sec) Nov 23 22:54:16.421727 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 22:54:16.432502 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:54:16.436195 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:54:16.479955 systemd-udevd[495]: Using default interface naming scheme 'v255'. Nov 23 22:54:16.485472 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:54:16.490914 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 22:54:16.521913 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation Nov 23 22:54:16.555440 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:54:16.558086 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:54:16.622600 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:54:16.627308 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 22:54:16.722649 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Nov 23 22:54:16.733348 kernel: scsi host0: Virtio SCSI HBA Nov 23 22:54:16.745669 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 23 22:54:16.745753 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 23 22:54:16.753014 kernel: ACPI: bus type USB registered Nov 23 22:54:16.753106 kernel: usbcore: registered new interface driver usbfs Nov 23 22:54:16.755102 kernel: usbcore: registered new interface driver hub Nov 23 22:54:16.755829 kernel: usbcore: registered new device driver usb Nov 23 22:54:16.771570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:54:16.772345 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:16.774531 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:16.778872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 23 22:54:16.784658 kernel: sd 0:0:0:1: Power-on or device reset occurred Nov 23 22:54:16.787716 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 23 22:54:16.787911 kernel: sd 0:0:0:1: [sda] Write Protect is off Nov 23 22:54:16.787996 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Nov 23 22:54:16.788068 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 23 22:54:16.800128 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 23 22:54:16.800190 kernel: GPT:17805311 != 80003071 Nov 23 22:54:16.800203 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 22:54:16.800215 kernel: GPT:17805311 != 80003071 Nov 23 22:54:16.800226 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 22:54:16.800238 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 22:54:16.802662 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Nov 23 22:54:16.809671 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 22:54:16.809867 kernel: sr 0:0:0:0: Power-on or device reset occurred Nov 23 22:54:16.811005 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 23 22:54:16.811185 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Nov 23 22:54:16.811736 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 23 22:54:16.812707 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 23 22:54:16.813892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:16.816719 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Nov 23 22:54:16.818118 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 22:54:16.818326 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 23 22:54:16.818646 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 23 22:54:16.819735 kernel: hub 1-0:1.0: USB hub found Nov 23 22:54:16.819917 kernel: hub 1-0:1.0: 4 ports detected Nov 23 22:54:16.821232 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 23 22:54:16.822647 kernel: hub 2-0:1.0: USB hub found Nov 23 22:54:16.824641 kernel: hub 2-0:1.0: 4 ports detected Nov 23 22:54:16.872934 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 23 22:54:16.899146 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 23 22:54:16.899950 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 23 22:54:16.908523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 23 22:54:16.917460 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 23 22:54:16.920138 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 22:54:16.937156 disk-uuid[596]: Primary Header is updated. Nov 23 22:54:16.937156 disk-uuid[596]: Secondary Entries is updated. Nov 23 22:54:16.937156 disk-uuid[596]: Secondary Header is updated. Nov 23 22:54:16.949714 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 22:54:17.055705 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 23 22:54:17.091736 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 22:54:17.093112 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
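The GPT warnings above come from a disk that was grown after the image was written: the primary header still points at a backup header located at LBA 17805311, while the attached device now has 80003072 512-byte sectors, so the backup belongs in the last LBA, 80003071. disk-uuid.service fixes this a moment later by rewriting the secondary entries and header (and generating new GPT UUIDs for first boot). The arithmetic, using only the numbers in the log:

    # Why "GPT:17805311 != 80003071": the backup GPT header must sit in the last LBA.
    SECTOR = 512
    total_sectors = 80003072      # "[sda] 80003072 512-byte logical blocks"
    recorded_alt_lba = 17805311   # backup header location stored in the primary header

    expected_alt_lba = total_sectors - 1
    print(expected_alt_lba)                          # 80003071
    print((recorded_alt_lba + 1) * SECTOR / 2**30)   # ~8.5 GiB: size of the original image
    print(total_sectors * SECTOR / 2**30)            # ~38.1 GiB: size of the grown disk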
Nov 23 22:54:17.094179 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:54:17.095320 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:54:17.097341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 22:54:17.127912 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:54:17.190717 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Nov 23 22:54:17.190807 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 23 22:54:17.191056 kernel: usbcore: registered new interface driver usbhid Nov 23 22:54:17.192811 kernel: usbhid: USB HID core driver Nov 23 22:54:17.293660 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Nov 23 22:54:17.420678 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Nov 23 22:54:17.474119 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Nov 23 22:54:17.974655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 22:54:17.976652 disk-uuid[598]: The operation has completed successfully. Nov 23 22:54:18.043482 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 22:54:18.043618 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 22:54:18.067608 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 22:54:18.093890 sh[628]: Success Nov 23 22:54:18.110943 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 22:54:18.111009 kernel: device-mapper: uevent: version 1.0.3 Nov 23 22:54:18.111020 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 22:54:18.119644 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 22:54:18.173651 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 22:54:18.178749 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 22:54:18.189897 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 23 22:54:18.199655 kernel: BTRFS: device fsid 9fed50bd-c943-4402-9e9a-f39625143eb9 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (640) Nov 23 22:54:18.201641 kernel: BTRFS info (device dm-0): first mount of filesystem 9fed50bd-c943-4402-9e9a-f39625143eb9 Nov 23 22:54:18.201733 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:18.209127 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 23 22:54:18.209199 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 22:54:18.209226 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 22:54:18.212070 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 22:54:18.214143 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:54:18.215372 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 23 22:54:18.216291 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
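verity-setup.service above assembles /dev/mapper/usr as a dm-verity device: reads from the USR-A partition are verified against a sha256 hash tree whose trusted root hash was passed on the kernel command line as verity.usrhash= (the sha256-ce line shows the hardware-accelerated implementation being used). As a rough illustration only, and not the exact invocation Flatcar's unit performs (which also supplies data and hash offsets into the partition), a dm-verity mapping is opened with cryptsetup's veritysetup roughly like this; the data and hash device paths below are placeholders:

    # Hedged sketch: open a dm-verity device from a data device, a hash device and
    # a trusted root hash. Device paths are placeholders, not the ones on this host;
    # the root hash is the verity.usrhash= value from the kernel command line above.
    import subprocess

    root_hash = "c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34"
    subprocess.run(
        ["veritysetup", "open", "/dev/sdX2", "usr", "/dev/sdX3", root_hash],
        check=True,
    )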
Nov 23 22:54:18.219907 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 22:54:18.252655 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (671) Nov 23 22:54:18.254696 kernel: BTRFS info (device sda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:18.254764 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:18.259983 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 22:54:18.260063 kernel: BTRFS info (device sda6): turning on async discard Nov 23 22:54:18.260080 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 22:54:18.266667 kernel: BTRFS info (device sda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:18.267729 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 22:54:18.270851 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 22:54:18.363466 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:54:18.366277 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:54:18.413453 systemd-networkd[811]: lo: Link UP Nov 23 22:54:18.414677 systemd-networkd[811]: lo: Gained carrier Nov 23 22:54:18.416281 systemd-networkd[811]: Enumeration completed Nov 23 22:54:18.416499 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:54:18.416785 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:18.416789 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:54:18.417564 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:18.417567 systemd-networkd[811]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:54:18.418085 systemd-networkd[811]: eth0: Link UP Nov 23 22:54:18.418292 systemd-networkd[811]: eth1: Link UP Nov 23 22:54:18.418543 systemd-networkd[811]: eth0: Gained carrier Nov 23 22:54:18.418554 systemd-networkd[811]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:18.421799 systemd[1]: Reached target network.target - Network. Nov 23 22:54:18.425520 systemd-networkd[811]: eth1: Gained carrier Nov 23 22:54:18.425537 systemd-networkd[811]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:18.440043 ignition[718]: Ignition 2.22.0 Nov 23 22:54:18.440061 ignition[718]: Stage: fetch-offline Nov 23 22:54:18.440355 ignition[718]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:18.440367 ignition[718]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:18.447172 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:54:18.440494 ignition[718]: parsed url from cmdline: "" Nov 23 22:54:18.449209 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 23 22:54:18.440498 ignition[718]: no config URL provided Nov 23 22:54:18.450783 systemd-networkd[811]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 23 22:54:18.440503 ignition[718]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 22:54:18.440510 ignition[718]: no config at "/usr/lib/ignition/user.ign" Nov 23 22:54:18.440516 ignition[718]: failed to fetch config: resource requires networking Nov 23 22:54:18.440784 ignition[718]: Ignition finished successfully Nov 23 22:54:18.469795 systemd-networkd[811]: eth0: DHCPv4 address 91.98.91.202/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 23 22:54:18.486673 ignition[820]: Ignition 2.22.0 Nov 23 22:54:18.486687 ignition[820]: Stage: fetch Nov 23 22:54:18.486839 ignition[820]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:18.486849 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:18.486924 ignition[820]: parsed url from cmdline: "" Nov 23 22:54:18.486927 ignition[820]: no config URL provided Nov 23 22:54:18.486932 ignition[820]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 22:54:18.486939 ignition[820]: no config at "/usr/lib/ignition/user.ign" Nov 23 22:54:18.486968 ignition[820]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 23 22:54:18.493712 ignition[820]: GET result: OK Nov 23 22:54:18.493816 ignition[820]: parsing config with SHA512: 2aaa92868db74afd91000d27e84e695e6f28505da50c8e642b326b97fbece9c7befbd4f8620377794506f511cd922b7673c8e1c0741eaa241f673b95c84d7647 Nov 23 22:54:18.498942 unknown[820]: fetched base config from "system" Nov 23 22:54:18.499608 unknown[820]: fetched base config from "system" Nov 23 22:54:18.500116 unknown[820]: fetched user config from "hetzner" Nov 23 22:54:18.501926 ignition[820]: fetch: fetch complete Nov 23 22:54:18.503083 ignition[820]: fetch: fetch passed Nov 23 22:54:18.503187 ignition[820]: Ignition finished successfully Nov 23 22:54:18.507098 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 23 22:54:18.508933 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 22:54:18.553571 ignition[827]: Ignition 2.22.0 Nov 23 22:54:18.553591 ignition[827]: Stage: kargs Nov 23 22:54:18.553770 ignition[827]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:18.553781 ignition[827]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:18.554943 ignition[827]: kargs: kargs passed Nov 23 22:54:18.555001 ignition[827]: Ignition finished successfully Nov 23 22:54:18.557248 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 22:54:18.561809 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 23 22:54:18.603986 ignition[834]: Ignition 2.22.0 Nov 23 22:54:18.604002 ignition[834]: Stage: disks Nov 23 22:54:18.604408 ignition[834]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:18.604420 ignition[834]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:18.605529 ignition[834]: disks: disks passed Nov 23 22:54:18.605595 ignition[834]: Ignition finished successfully Nov 23 22:54:18.609897 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 22:54:18.610939 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 23 22:54:18.611615 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 22:54:18.612793 systemd[1]: Reached target local-fs.target - Local File Systems. 
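In the fetch stage above, Ignition pulls the user-provided config from Hetzner's link-local metadata service and logs a SHA512 of the payload before parsing it; the earlier fetch-offline stage had already noted that the config requires networking and deferred. A minimal sketch of that fetch, using only the endpoint shown in the log (this is not Ignition's own code, just the shape of the request):

    # Fetch the userdata the way the log above reports, and hash it like the
    # "parsing config with SHA512: ..." line.
    import hashlib
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the log

    with urllib.request.urlopen(USERDATA_URL, timeout=10) as resp:
        payload = resp.read()

    print("GET result: OK")
    print("parsing config with SHA512:", hashlib.sha512(payload).hexdigest())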
Nov 23 22:54:18.613759 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:54:18.614714 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:54:18.616569 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 22:54:18.657542 systemd-fsck[842]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Nov 23 22:54:18.661974 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 22:54:18.667235 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 22:54:18.741687 kernel: EXT4-fs (sda9): mounted filesystem c70a3a7b-80c4-4387-ab29-1bf940859b86 r/w with ordered data mode. Quota mode: none. Nov 23 22:54:18.743866 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 22:54:18.746440 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 22:54:18.749849 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:54:18.754698 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 22:54:18.760475 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 23 22:54:18.763522 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 22:54:18.765186 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:54:18.768165 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 22:54:18.773745 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 22:54:18.778049 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (850) Nov 23 22:54:18.780671 kernel: BTRFS info (device sda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:18.780726 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:18.790655 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 22:54:18.790711 kernel: BTRFS info (device sda6): turning on async discard Nov 23 22:54:18.791742 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 22:54:18.798965 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 22:54:18.838679 initrd-setup-root[877]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 22:54:18.842023 coreos-metadata[852]: Nov 23 22:54:18.841 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 23 22:54:18.845133 coreos-metadata[852]: Nov 23 22:54:18.844 INFO Fetch successful Nov 23 22:54:18.845133 coreos-metadata[852]: Nov 23 22:54:18.844 INFO wrote hostname ci-4459-1-2-3-c3120372ad to /sysroot/etc/hostname Nov 23 22:54:18.850825 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 22:54:18.853368 initrd-setup-root[884]: cut: /sysroot/etc/group: No such file or directory Nov 23 22:54:18.859036 initrd-setup-root[892]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 22:54:18.865423 initrd-setup-root[899]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 22:54:18.962315 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 23 22:54:18.965758 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 22:54:18.968385 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
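flatcar-metadata-hostname.service, finished above, does one small job before the pivot: fetch the instance hostname from the same metadata service and write it into the still-mounted new root at /sysroot/etc/hostname. A sketch under the same assumptions (endpoint and target path are the ones reported in the log):

    # Fetch the hostname and write it where the log above says it was written.
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    print(f"wrote hostname {hostname} to /sysroot/etc/hostname")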
Nov 23 22:54:18.983656 kernel: BTRFS info (device sda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:19.001526 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 23 22:54:19.017721 ignition[968]: INFO : Ignition 2.22.0 Nov 23 22:54:19.017721 ignition[968]: INFO : Stage: mount Nov 23 22:54:19.019545 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:19.019545 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:19.019545 ignition[968]: INFO : mount: mount passed Nov 23 22:54:19.019545 ignition[968]: INFO : Ignition finished successfully Nov 23 22:54:19.020496 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 22:54:19.024767 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 22:54:19.201077 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 22:54:19.205131 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:54:19.238805 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979) Nov 23 22:54:19.240333 kernel: BTRFS info (device sda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:19.240407 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:19.244756 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 22:54:19.244846 kernel: BTRFS info (device sda6): turning on async discard Nov 23 22:54:19.244870 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 22:54:19.249110 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 22:54:19.286270 ignition[996]: INFO : Ignition 2.22.0 Nov 23 22:54:19.286270 ignition[996]: INFO : Stage: files Nov 23 22:54:19.287350 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:19.287350 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:19.287350 ignition[996]: DEBUG : files: compiled without relabeling support, skipping Nov 23 22:54:19.289681 ignition[996]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 22:54:19.289681 ignition[996]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 22:54:19.293814 ignition[996]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 22:54:19.295215 ignition[996]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 22:54:19.296726 unknown[996]: wrote ssh authorized keys file for user: core Nov 23 22:54:19.297514 ignition[996]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 22:54:19.301091 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 22:54:19.302566 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 23 22:54:19.396896 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 22:54:19.468652 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 23 22:54:19.468652 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 22:54:19.468652 ignition[996]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 22:54:19.468652 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:54:19.468652 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:54:19.468652 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 22:54:19.478082 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 23 22:54:19.768647 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 22:54:19.830845 systemd-networkd[811]: eth0: Gained IPv6LL Nov 23 22:54:20.022890 systemd-networkd[811]: eth1: Gained IPv6LL Nov 23 22:54:20.445279 ignition[996]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 23 22:54:20.445279 ignition[996]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 22:54:20.450335 ignition[996]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(d): op(e): [finished] writing systemd drop-in 
"00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 23 22:54:20.454976 ignition[996]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 22:54:20.464973 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:54:20.464973 ignition[996]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:54:20.464973 ignition[996]: INFO : files: files passed Nov 23 22:54:20.464973 ignition[996]: INFO : Ignition finished successfully Nov 23 22:54:20.461100 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 22:54:20.463974 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 22:54:20.467896 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 22:54:20.488993 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 22:54:20.489190 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 23 22:54:20.496166 initrd-setup-root-after-ignition[1025]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:54:20.496166 initrd-setup-root-after-ignition[1025]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:54:20.498451 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:54:20.501310 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:54:20.502504 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 22:54:20.506196 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 23 22:54:20.589084 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 22:54:20.589262 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 22:54:20.590923 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 22:54:20.592235 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 22:54:20.593564 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 22:54:20.595791 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 22:54:20.625797 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:54:20.630477 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 22:54:20.650479 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:54:20.652834 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:54:20.654128 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 22:54:20.655518 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 22:54:20.655777 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:54:20.658701 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Nov 23 22:54:20.660496 systemd[1]: Stopped target basic.target - Basic System. Nov 23 22:54:20.661324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 22:54:20.662210 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:54:20.663280 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 22:54:20.664425 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:54:20.665348 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 22:54:20.666234 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:54:20.667328 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 22:54:20.668267 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 22:54:20.669196 systemd[1]: Stopped target swap.target - Swaps. Nov 23 22:54:20.669976 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 22:54:20.670161 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:54:20.671263 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:54:20.672331 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:54:20.673378 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 22:54:20.677331 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:54:20.678213 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 22:54:20.678421 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 22:54:20.680425 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 22:54:20.680641 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:54:20.681913 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 22:54:20.682075 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 22:54:20.682855 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 23 22:54:20.683049 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 22:54:20.685847 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 22:54:20.689038 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 22:54:20.689607 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 22:54:20.690790 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:54:20.692992 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 22:54:20.693171 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:54:20.702808 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 22:54:20.702919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 22:54:20.715642 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 22:54:20.720547 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 22:54:20.721361 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Nov 23 22:54:20.727945 ignition[1049]: INFO : Ignition 2.22.0 Nov 23 22:54:20.727945 ignition[1049]: INFO : Stage: umount Nov 23 22:54:20.727945 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:20.727945 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:20.731448 ignition[1049]: INFO : umount: umount passed Nov 23 22:54:20.731448 ignition[1049]: INFO : Ignition finished successfully Nov 23 22:54:20.733189 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 22:54:20.733356 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 22:54:20.734885 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 22:54:20.734960 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 22:54:20.736136 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 22:54:20.736193 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 22:54:20.737200 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 23 22:54:20.737251 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 23 22:54:20.739152 systemd[1]: Stopped target network.target - Network. Nov 23 22:54:20.740066 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 22:54:20.740148 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:54:20.741164 systemd[1]: Stopped target paths.target - Path Units. Nov 23 22:54:20.742160 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 22:54:20.745774 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:54:20.748237 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 22:54:20.749589 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 22:54:20.750813 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 22:54:20.750868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:54:20.751907 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 22:54:20.751938 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:54:20.752851 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 22:54:20.752912 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 22:54:20.753942 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 22:54:20.753980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 22:54:20.754964 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 22:54:20.755011 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 22:54:20.756249 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 22:54:20.757136 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 22:54:20.762741 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 22:54:20.762897 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 22:54:20.767304 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 22:54:20.767589 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 22:54:20.767735 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 22:54:20.772762 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. 
Nov 23 22:54:20.774237 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 22:54:20.775062 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 22:54:20.775109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:54:20.778435 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 22:54:20.779003 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 22:54:20.779083 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:54:20.781863 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 22:54:20.781931 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:54:20.784804 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 22:54:20.784870 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 22:54:20.786852 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 22:54:20.786918 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:54:20.788330 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:54:20.791827 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 22:54:20.791908 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:54:20.804068 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 22:54:20.805130 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:54:20.806570 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 22:54:20.809518 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 22:54:20.811824 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 22:54:20.811887 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:54:20.815099 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 22:54:20.815217 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:54:20.817503 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 22:54:20.817605 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 22:54:20.818445 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 22:54:20.818507 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:54:20.821732 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 22:54:20.822312 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 22:54:20.822373 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:54:20.832432 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 22:54:20.832549 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:54:20.836803 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 23 22:54:20.836893 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:54:20.839729 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Nov 23 22:54:20.839860 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:54:20.841931 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:54:20.842072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:20.846984 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 22:54:20.847055 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 23 22:54:20.847084 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 22:54:20.847115 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:54:20.847486 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 22:54:20.848669 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 22:54:20.851222 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 22:54:20.851367 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 22:54:20.853455 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 22:54:20.855247 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 22:54:20.881075 systemd[1]: Switching root. Nov 23 22:54:20.917913 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Nov 23 22:54:20.918034 systemd-journald[245]: Journal stopped Nov 23 22:54:22.044174 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 22:54:22.044254 kernel: SELinux: policy capability open_perms=1 Nov 23 22:54:22.044265 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 22:54:22.044291 kernel: SELinux: policy capability always_check_network=0 Nov 23 22:54:22.044300 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 22:54:22.044309 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 22:54:22.044319 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 22:54:22.044331 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 22:54:22.044340 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 22:54:22.044350 kernel: audit: type=1403 audit(1763938461.175:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 22:54:22.044365 systemd[1]: Successfully loaded SELinux policy in 77.443ms. Nov 23 22:54:22.044386 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.425ms. Nov 23 22:54:22.044396 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:54:22.044407 systemd[1]: Detected virtualization kvm. Nov 23 22:54:22.044418 systemd[1]: Detected architecture arm64. Nov 23 22:54:22.044429 systemd[1]: Detected first boot. Nov 23 22:54:22.044441 systemd[1]: Hostname set to <ci-4459-1-2-3-c3120372ad>. Nov 23 22:54:22.044451 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:54:22.044464 zram_generator::config[1092]: No configuration found. Nov 23 22:54:22.044475 kernel: NET: Registered PF_VSOCK protocol family Nov 23 22:54:22.044490 systemd[1]: Populated /etc with preset unit settings.
Nov 23 22:54:22.044504 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 22:54:22.044516 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 22:54:22.044526 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 22:54:22.044537 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 22:54:22.044547 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 22:54:22.044558 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 22:54:22.044569 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 22:54:22.044579 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 22:54:22.044591 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 22:54:22.044601 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 22:54:22.044610 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 22:54:22.045377 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 22:54:22.045416 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:54:22.045428 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:54:22.045439 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 23 22:54:22.045458 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 22:54:22.045469 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 22:54:22.045487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:54:22.045498 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 23 22:54:22.045508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:54:22.045519 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:54:22.045529 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 22:54:22.045539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 22:54:22.045555 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 22:54:22.045568 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 22:54:22.045580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:54:22.045591 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:54:22.045603 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:54:22.045613 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:54:22.045640 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 22:54:22.045652 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 22:54:22.045662 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 22:54:22.045676 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:54:22.045687 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Nov 23 22:54:22.045697 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:54:22.045707 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 22:54:22.045717 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 22:54:22.045727 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 22:54:22.045739 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 22:54:22.045748 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 22:54:22.045759 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 22:54:22.045770 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 22:54:22.045783 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 22:54:22.045793 systemd[1]: Reached target machines.target - Containers. Nov 23 22:54:22.045803 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 22:54:22.045814 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:22.045824 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:54:22.045834 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 22:54:22.045844 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:54:22.045855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:54:22.045867 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:54:22.045877 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 22:54:22.045887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:54:22.045899 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 22:54:22.045909 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 22:54:22.045919 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 22:54:22.045929 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 22:54:22.045940 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 22:54:22.045951 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:22.045961 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:54:22.045971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:54:22.045981 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:54:22.045997 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 22:54:22.046010 kernel: loop: module loaded Nov 23 22:54:22.046023 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 22:54:22.046035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 23 22:54:22.046045 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 22:54:22.046059 systemd[1]: Stopped verity-setup.service. Nov 23 22:54:22.046069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 22:54:22.046079 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 22:54:22.046089 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 22:54:22.046099 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 22:54:22.046110 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 22:54:22.046119 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 22:54:22.046129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:54:22.046140 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 22:54:22.046151 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 22:54:22.046161 kernel: ACPI: bus type drm_connector registered Nov 23 22:54:22.046172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:54:22.046182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:54:22.046192 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:54:22.046202 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:54:22.046212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:54:22.046222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:54:22.046232 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:54:22.046247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:54:22.046256 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:54:22.046266 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:54:22.046287 kernel: fuse: init (API version 7.41) Nov 23 22:54:22.046299 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:54:22.046311 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 23 22:54:22.046321 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:54:22.046331 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:54:22.046381 systemd-journald[1160]: Collecting audit messages is disabled. Nov 23 22:54:22.046417 systemd-journald[1160]: Journal started Nov 23 22:54:22.046439 systemd-journald[1160]: Runtime Journal (/run/log/journal/5d142911efd54f2d910c51a633b9fa51) is 8M, max 76.5M, 68.5M free. Nov 23 22:54:21.714653 systemd[1]: Queued start job for default target multi-user.target. Nov 23 22:54:22.055131 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 22:54:22.055160 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:54:21.739584 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 23 22:54:21.740313 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 22:54:22.051059 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 22:54:22.051314 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Nov 23 22:54:22.054330 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 22:54:22.057377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 22:54:22.063762 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 22:54:22.064689 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 22:54:22.083601 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 22:54:22.083666 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:54:22.085405 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 22:54:22.090484 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 22:54:22.092846 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:22.095292 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 22:54:22.097890 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 22:54:22.099872 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:54:22.102247 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 22:54:22.105881 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 22:54:22.112921 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:54:22.116332 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Nov 23 22:54:22.116348 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Nov 23 22:54:22.126761 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:54:22.132667 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 22:54:22.137103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:54:22.144663 kernel: loop0: detected capacity change from 0 to 119840 Nov 23 22:54:22.144699 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 22:54:22.146791 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 22:54:22.149597 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 22:54:22.152018 systemd-journald[1160]: Time spent on flushing to /var/log/journal/5d142911efd54f2d910c51a633b9fa51 is 50.754ms for 1181 entries. Nov 23 22:54:22.152018 systemd-journald[1160]: System Journal (/var/log/journal/5d142911efd54f2d910c51a633b9fa51) is 8M, max 584.8M, 576.8M free. Nov 23 22:54:22.236579 systemd-journald[1160]: Received client request to flush runtime journal. Nov 23 22:54:22.236660 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 22:54:22.236686 kernel: loop1: detected capacity change from 0 to 211168 Nov 23 22:54:22.231004 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 22:54:22.238442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:54:22.242944 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Nov 23 22:54:22.253485 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 22:54:22.276657 kernel: loop2: detected capacity change from 0 to 8 Nov 23 22:54:22.287531 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Nov 23 22:54:22.287554 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. Nov 23 22:54:22.293759 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:54:22.297708 kernel: loop3: detected capacity change from 0 to 100632 Nov 23 22:54:22.340947 kernel: loop4: detected capacity change from 0 to 119840 Nov 23 22:54:22.358658 kernel: loop5: detected capacity change from 0 to 211168 Nov 23 22:54:22.380663 kernel: loop6: detected capacity change from 0 to 8 Nov 23 22:54:22.386653 kernel: loop7: detected capacity change from 0 to 100632 Nov 23 22:54:22.407397 (sd-merge)[1237]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 23 22:54:22.408169 (sd-merge)[1237]: Merged extensions into '/usr'. Nov 23 22:54:22.414103 systemd[1]: Reload requested from client PID 1215 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 22:54:22.414120 systemd[1]: Reloading... Nov 23 22:54:22.512684 zram_generator::config[1263]: No configuration found. Nov 23 22:54:22.688918 ldconfig[1211]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 23 22:54:22.703595 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 22:54:22.704155 systemd[1]: Reloading finished in 289 ms. Nov 23 22:54:22.719063 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 22:54:22.720535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 22:54:22.732814 systemd[1]: Starting ensure-sysext.service... Nov 23 22:54:22.737863 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:54:22.753952 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 22:54:22.770780 systemd[1]: Reload requested from client PID 1300 ('systemctl') (unit ensure-sysext.service)... Nov 23 22:54:22.770801 systemd[1]: Reloading... Nov 23 22:54:22.791205 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 22:54:22.791248 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 22:54:22.791581 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 22:54:22.792853 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 22:54:22.793500 systemd-tmpfiles[1301]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 22:54:22.794140 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Nov 23 22:54:22.794199 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Nov 23 22:54:22.797705 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:54:22.797715 systemd-tmpfiles[1301]: Skipping /boot Nov 23 22:54:22.808249 systemd-tmpfiles[1301]: Detected autofs mount point /boot during canonicalization of boot. 
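The (sd-merge) lines above are systemd-sysext discovering the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner') and overlaying them onto /usr; the kubernetes image is the one the Ignition files stage linked into /etc/extensions earlier in this log. A small sketch that only lists such images and resolves their symlinks, not a reimplementation of sd-merge (which also scans /run/extensions and /var/lib/extensions):

    # List sysext images visible in /etc/extensions and resolve links such as
    # kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw.
    from pathlib import Path

    for image in sorted(Path("/etc/extensions").glob("*.raw")):
        print(f"{image.name} -> {image.resolve()}")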
Nov 23 22:54:22.809906 systemd-tmpfiles[1301]: Skipping /boot Nov 23 22:54:22.849944 zram_generator::config[1330]: No configuration found. Nov 23 22:54:23.023915 systemd[1]: Reloading finished in 252 ms. Nov 23 22:54:23.048966 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 22:54:23.058349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:54:23.059289 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 22:54:23.068857 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:54:23.079881 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 22:54:23.084526 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 22:54:23.089937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:54:23.094970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:54:23.101998 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 23 22:54:23.111407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:23.114339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:54:23.119137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:54:23.122034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:54:23.123956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:23.124109 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:23.130112 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 22:54:23.132692 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:23.132869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:23.132959 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:23.137846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:23.142069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:54:23.143073 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:23.143769 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:23.152089 systemd[1]: Finished ensure-sysext.service. Nov 23 22:54:23.158951 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
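The loopN capacity changes, the squashfs probe, and the (sd-merge) entries above correspond to systemd-sysext attaching the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner') and merging them into /usr, after which systemd reloads its unit configuration. A minimal sketch, assuming it runs on a systemd host with systemd-sysext available in PATH (not part of the log), that lists the currently merged extensions:

```python
import subprocess

# Ask systemd-sysext which extension images are currently merged.
# Assumes a systemd host with systemd-sysext available in PATH.
result = subprocess.run(
    ["systemd-sysext", "status"],
    capture_output=True, text=True, check=False,
)
print(result.stdout or result.stderr)
```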
Nov 23 22:54:23.163582 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 22:54:23.188808 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 22:54:23.199575 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 22:54:23.206400 systemd-udevd[1375]: Using default interface naming scheme 'v255'. Nov 23 22:54:23.216048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:54:23.217710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:54:23.221157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:54:23.221369 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:54:23.224889 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:54:23.226190 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:54:23.232596 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:54:23.232974 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:54:23.234665 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:54:23.235307 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:54:23.241071 augenrules[1409]: No rules Nov 23 22:54:23.245271 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:54:23.246058 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:54:23.257069 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 22:54:23.268796 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:54:23.274052 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:54:23.279685 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 22:54:23.280609 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 22:54:23.283125 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 22:54:23.436194 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 23 22:54:23.544854 systemd-networkd[1417]: lo: Link UP Nov 23 22:54:23.544865 systemd-networkd[1417]: lo: Gained carrier Nov 23 22:54:23.547342 systemd-networkd[1417]: Enumeration completed Nov 23 22:54:23.547516 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:54:23.547826 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:23.547830 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:54:23.548842 systemd-networkd[1417]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:23.548854 systemd-networkd[1417]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 23 22:54:23.549889 systemd-networkd[1417]: eth0: Link UP Nov 23 22:54:23.550020 systemd-networkd[1417]: eth0: Gained carrier Nov 23 22:54:23.550039 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:23.551554 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 22:54:23.554770 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 22:54:23.554980 systemd-networkd[1417]: eth1: Link UP Nov 23 22:54:23.555988 systemd-networkd[1417]: eth1: Gained carrier Nov 23 22:54:23.556014 systemd-networkd[1417]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:23.564087 kernel: mousedev: PS/2 mouse device common for all mice Nov 23 22:54:23.576734 systemd-networkd[1417]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 23 22:54:23.582953 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 22:54:23.602788 systemd-networkd[1417]: eth0: DHCPv4 address 91.98.91.202/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 23 22:54:23.647951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 23 22:54:23.651831 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 22:54:23.690165 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 22:54:23.706327 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 23 22:54:23.707952 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 22:54:23.722325 systemd-resolved[1373]: Positive Trust Anchors: Nov 23 22:54:23.722350 systemd-resolved[1373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:54:23.722383 systemd-resolved[1373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:54:23.729781 systemd-resolved[1373]: Using system hostname 'ci-4459-1-2-3-c3120372ad'. Nov 23 22:54:23.731517 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:54:23.732814 systemd[1]: Reached target network.target - Network. Nov 23 22:54:23.733579 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:54:23.734721 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:54:23.736797 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 22:54:23.739383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 22:54:23.742010 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
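Both interfaces above receive single-address /32 leases (10.0.0.3/32 on eth1, and 91.98.91.202/32 with gateway 172.31.1.1 on eth0), a layout in which the gateway is not inside the assigned prefix and is typically reached via an on-link host route instead of an ordinary subnet route. A small check, purely illustrative and using only the addresses reported above, confirming that point:

```python
import ipaddress

# Addresses as reported by systemd-networkd in the entries above.
lease = ipaddress.ip_network("91.98.91.202/32")
gateway = ipaddress.ip_address("172.31.1.1")

# A /32 contains exactly one host, so the gateway cannot lie inside it.
print(f"{gateway} in {lease}? {gateway in lease}")
print(f"hosts in {lease}: {lease.num_addresses}")
```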
Nov 23 22:54:23.744424 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 22:54:23.746361 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 22:54:23.747569 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 22:54:23.747740 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:54:23.748428 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:54:23.753886 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 22:54:23.754149 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Nov 23 22:54:23.754177 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 23 22:54:23.754199 kernel: [drm] features: -context_init Nov 23 22:54:23.755172 kernel: [drm] number of scanouts: 1 Nov 23 22:54:23.756399 kernel: [drm] number of cap sets: 0 Nov 23 22:54:23.757655 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Nov 23 22:54:23.758000 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 22:54:23.762406 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 22:54:23.769755 kernel: Console: switching to colour frame buffer device 160x50 Nov 23 22:54:23.776912 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 22:54:23.780001 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 22:54:23.785711 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 23 22:54:23.789065 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 22:54:23.790710 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 22:54:23.792555 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 22:54:23.795187 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 23 22:54:23.795234 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:54:23.796778 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:54:23.797899 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:54:23.797930 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:54:23.800296 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 22:54:23.804885 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 22:54:23.808187 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 22:54:23.812006 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 22:54:23.820444 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 22:54:23.824285 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 22:54:23.824916 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 22:54:23.829384 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Nov 23 22:54:23.834329 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 22:54:23.839929 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 23 22:54:23.847970 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 22:54:23.856789 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 22:54:23.862036 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 22:54:23.864506 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 22:54:23.865100 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 22:54:23.869649 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 22:54:23.873377 systemd-timesyncd[1391]: Contacted time server 178.63.67.56:123 (0.flatcar.pool.ntp.org). Nov 23 22:54:23.873779 systemd-timesyncd[1391]: Initial clock synchronization to Sun 2025-11-23 22:54:23.747526 UTC. Nov 23 22:54:23.876671 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 22:54:23.881149 coreos-metadata[1488]: Nov 23 22:54:23.880 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 23 22:54:23.882919 coreos-metadata[1488]: Nov 23 22:54:23.882 INFO Fetch successful Nov 23 22:54:23.883567 coreos-metadata[1488]: Nov 23 22:54:23.883 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 23 22:54:23.889023 coreos-metadata[1488]: Nov 23 22:54:23.885 INFO Fetch successful Nov 23 22:54:23.886709 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 22:54:23.890656 jq[1491]: false Nov 23 22:54:23.894131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 22:54:23.897735 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 23 22:54:23.914155 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 22:54:23.914434 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 22:54:23.930009 extend-filesystems[1492]: Found /dev/sda6 Nov 23 22:54:23.932872 jq[1504]: true Nov 23 22:54:23.947653 extend-filesystems[1492]: Found /dev/sda9 Nov 23 22:54:23.965946 extend-filesystems[1492]: Checking size of /dev/sda9 Nov 23 22:54:23.972940 tar[1508]: linux-arm64/LICENSE Nov 23 22:54:23.972940 tar[1508]: linux-arm64/helm Nov 23 22:54:23.978931 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 22:54:23.979211 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 22:54:23.987656 jq[1529]: true Nov 23 22:54:23.988059 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 22:54:23.993991 dbus-daemon[1489]: [system] SELinux support is enabled Nov 23 22:54:23.995220 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 22:54:24.002571 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 22:54:24.002617 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
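The coreos-metadata entries above fetch instance data from the Hetzner endpoint http://169.254.169.254/hetzner/v1/metadata (and .../private-networks). A minimal sketch of the same request, not part of the log and only meaningful when run from inside a Hetzner Cloud instance, since 169.254.169.254 is a link-local metadata address:

```python
import urllib.request

# Metadata endpoint as fetched by coreos-metadata in the entries above.
# Only reachable from inside a Hetzner Cloud instance.
URL = "http://169.254.169.254/hetzner/v1/metadata"

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        print(resp.read().decode("utf-8", errors="replace"))
except OSError as exc:
    print(f"metadata endpoint not reachable: {exc}")
```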
Nov 23 22:54:24.005748 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 22:54:24.005779 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 22:54:24.012641 update_engine[1502]: I20251123 22:54:24.012040 1502 main.cc:92] Flatcar Update Engine starting Nov 23 22:54:24.023030 extend-filesystems[1492]: Resized partition /dev/sda9 Nov 23 22:54:24.027464 systemd[1]: Started update-engine.service - Update Engine. Nov 23 22:54:24.027865 update_engine[1502]: I20251123 22:54:24.027505 1502 update_check_scheduler.cc:74] Next update check in 7m45s Nov 23 22:54:24.029816 extend-filesystems[1549]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 22:54:24.049644 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 23 22:54:24.063903 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 22:54:24.125786 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 22:54:24.129079 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 22:54:24.152419 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:54:24.156205 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 22:54:24.161222 systemd[1]: Starting sshkeys.service... Nov 23 22:54:24.178645 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 23 22:54:24.190165 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:24.194921 extend-filesystems[1549]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 23 22:54:24.194921 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 23 22:54:24.194921 extend-filesystems[1549]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 23 22:54:24.198787 extend-filesystems[1492]: Resized filesystem in /dev/sda9 Nov 23 22:54:24.198984 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 22:54:24.200692 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 22:54:24.228767 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 23 22:54:24.233452 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 22:54:24.363539 coreos-metadata[1582]: Nov 23 22:54:24.363 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 23 22:54:24.367452 coreos-metadata[1582]: Nov 23 22:54:24.366 INFO Fetch successful Nov 23 22:54:24.374716 unknown[1582]: wrote ssh authorized keys file for user: core Nov 23 22:54:24.402262 systemd-logind[1501]: New seat seat0. Nov 23 22:54:24.407828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:54:24.408078 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:24.408413 systemd-logind[1501]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 22:54:24.408432 systemd-logind[1501]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Nov 23 22:54:24.439014 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:54:24.439422 systemd[1]: Started systemd-logind.service - User Login Management. 
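extend-filesystems grows /dev/sda9 online from 1617920 to 9393147 blocks, and the resize output above notes these are 4k blocks. A quick back-of-the-envelope conversion of exactly those figures into GiB (illustrative only, not part of the log):

```python
BLOCK_SIZE = 4096                     # "(4k) blocks" per the resize output above
OLD_BLOCKS, NEW_BLOCKS = 1_617_920, 9_393_147

def gib(blocks: int) -> float:
    """Convert a 4k block count to GiB."""
    return blocks * BLOCK_SIZE / 2**30

print(f"before resize: {gib(OLD_BLOCKS):.2f} GiB")
print(f"after resize:  {gib(NEW_BLOCKS):.2f} GiB")
print(f"growth:        {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")
```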
Nov 23 22:54:24.451632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:24.465967 update-ssh-keys[1589]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:54:24.466619 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 22:54:24.474654 systemd[1]: Finished sshkeys.service. Nov 23 22:54:24.552202 containerd[1533]: time="2025-11-23T22:54:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 22:54:24.556649 containerd[1533]: time="2025-11-23T22:54:24.556087083Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 22:54:24.588068 containerd[1533]: time="2025-11-23T22:54:24.588021422Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.221µs" Nov 23 22:54:24.589297 containerd[1533]: time="2025-11-23T22:54:24.589251303Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 22:54:24.589591 containerd[1533]: time="2025-11-23T22:54:24.589576510Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 22:54:24.589815 containerd[1533]: time="2025-11-23T22:54:24.589795808Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 22:54:24.589924 containerd[1533]: time="2025-11-23T22:54:24.589907741Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 22:54:24.590796 containerd[1533]: time="2025-11-23T22:54:24.590770091Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:54:24.592101 containerd[1533]: time="2025-11-23T22:54:24.591257665Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.593457418Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.593784043Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.593803059Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.593816603Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.593824989Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.593905779Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: 
time="2025-11-23T22:54:24.594090745Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.594118305Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.594128896Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 22:54:24.595011 containerd[1533]: time="2025-11-23T22:54:24.594169724Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 22:54:24.596231 containerd[1533]: time="2025-11-23T22:54:24.596195574Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 22:54:24.596699 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:24.599732 containerd[1533]: time="2025-11-23T22:54:24.599527213Z" level=info msg="metadata content store policy set" policy=shared Nov 23 22:54:24.606004 containerd[1533]: time="2025-11-23T22:54:24.605923712Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 22:54:24.606004 containerd[1533]: time="2025-11-23T22:54:24.606007533Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 22:54:24.606117 containerd[1533]: time="2025-11-23T22:54:24.606025959Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 22:54:24.606117 containerd[1533]: time="2025-11-23T22:54:24.606039896Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 22:54:24.606117 containerd[1533]: time="2025-11-23T22:54:24.606056117Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 22:54:24.606117 containerd[1533]: time="2025-11-23T22:54:24.606068834Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 22:54:24.606117 containerd[1533]: time="2025-11-23T22:54:24.606083323Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 22:54:24.606217 containerd[1533]: time="2025-11-23T22:54:24.606123245Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 22:54:24.606217 containerd[1533]: time="2025-11-23T22:54:24.606137891Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 23 22:54:24.606217 containerd[1533]: time="2025-11-23T22:54:24.606149270Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 22:54:24.606217 containerd[1533]: time="2025-11-23T22:54:24.606158995Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 22:54:24.606217 containerd[1533]: time="2025-11-23T22:54:24.606173050Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 22:54:24.606342 containerd[1533]: time="2025-11-23T22:54:24.606317228Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service 
type=io.containerd.service.v1 Nov 23 22:54:24.606373 containerd[1533]: time="2025-11-23T22:54:24.606348213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 22:54:24.606373 containerd[1533]: time="2025-11-23T22:54:24.606365969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 22:54:24.606404 containerd[1533]: time="2025-11-23T22:54:24.606377072Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 22:54:24.606404 containerd[1533]: time="2025-11-23T22:54:24.606388726Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 22:54:24.606404 containerd[1533]: time="2025-11-23T22:54:24.606399474Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 22:54:24.606453 containerd[1533]: time="2025-11-23T22:54:24.606411010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 22:54:24.606453 containerd[1533]: time="2025-11-23T22:54:24.606422546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 22:54:24.606453 containerd[1533]: time="2025-11-23T22:54:24.606434279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 22:54:24.606453 containerd[1533]: time="2025-11-23T22:54:24.606447744Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 22:54:24.606518 containerd[1533]: time="2025-11-23T22:54:24.606459358Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 22:54:24.608637 containerd[1533]: time="2025-11-23T22:54:24.607782785Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 22:54:24.608637 containerd[1533]: time="2025-11-23T22:54:24.607915860Z" level=info msg="Start snapshots syncer" Nov 23 22:54:24.608637 containerd[1533]: time="2025-11-23T22:54:24.607953932Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 22:54:24.613837 containerd[1533]: time="2025-11-23T22:54:24.610041005Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 22:54:24.613837 containerd[1533]: time="2025-11-23T22:54:24.610106558Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610178726Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610367590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610397906Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610417670Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610437632Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610455742Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610467829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610479090Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610568777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 22:54:24.614022 containerd[1533]: 
time="2025-11-23T22:54:24.610592164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610605944Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610651103Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610668347Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:54:24.614022 containerd[1533]: time="2025-11-23T22:54:24.610678623Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610688624Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610705789Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610718506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610729215Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610818943Z" level=info msg="runtime interface created" Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610826620Z" level=info msg="created NRI interface" Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610835597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610849770Z" level=info msg="Connect containerd service" Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.610875952Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 22:54:24.614292 containerd[1533]: time="2025-11-23T22:54:24.612005987Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735727552Z" level=info msg="Start subscribing containerd event" Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735811492Z" level=info msg="Start recovering state" Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735905432Z" level=info msg="Start event monitor" Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735917991Z" level=info msg="Start cni network conf syncer for default" Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735925865Z" level=info msg="Start streaming server" Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735936692Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 22:54:24.736064 containerd[1533]: 
time="2025-11-23T22:54:24.735944527Z" level=info msg="runtime interface starting up..." Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735950039Z" level=info msg="starting plugins..." Nov 23 22:54:24.736064 containerd[1533]: time="2025-11-23T22:54:24.735963386Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 22:54:24.740708 containerd[1533]: time="2025-11-23T22:54:24.737755686Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 22:54:24.740708 containerd[1533]: time="2025-11-23T22:54:24.737824822Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 22:54:24.740708 containerd[1533]: time="2025-11-23T22:54:24.737893801Z" level=info msg="containerd successfully booted in 0.187639s" Nov 23 22:54:24.738008 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 22:54:24.761332 locksmithd[1550]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 22:54:24.875255 tar[1508]: linux-arm64/README.md Nov 23 22:54:24.895124 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 22:54:25.334793 systemd-networkd[1417]: eth0: Gained IPv6LL Nov 23 22:54:25.340781 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 22:54:25.345255 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 22:54:25.351873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:25.353930 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 22:54:25.405845 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 23 22:54:25.527754 systemd-networkd[1417]: eth1: Gained IPv6LL Nov 23 22:54:25.693695 sshd_keygen[1542]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 22:54:25.719709 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 22:54:25.723746 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 22:54:25.749210 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 22:54:25.749493 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 22:54:25.754954 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 22:54:25.779506 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 22:54:25.783694 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 22:54:25.786327 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 22:54:25.787469 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 22:54:26.232635 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:26.235261 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 22:54:26.239250 systemd[1]: Startup finished in 2.421s (kernel) + 5.531s (initrd) + 5.142s (userspace) = 13.095s. 
Nov 23 22:54:26.245304 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:54:26.804869 kubelet[1660]: E1123 22:54:26.804805 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:54:26.810884 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:54:26.811146 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:54:26.812156 systemd[1]: kubelet.service: Consumed 917ms CPU time, 258.2M memory peak. Nov 23 22:54:36.869296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 22:54:36.872682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:37.042553 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:37.053231 (kubelet)[1679]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:54:37.097502 kubelet[1679]: E1123 22:54:37.097412 1679 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:54:37.104136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:54:37.104287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:54:37.104905 systemd[1]: kubelet.service: Consumed 174ms CPU time, 105.4M memory peak. Nov 23 22:54:47.119130 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 22:54:47.123147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:47.403825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:47.415285 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:54:47.463391 kubelet[1694]: E1123 22:54:47.463291 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:54:47.466231 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:54:47.466368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:54:47.466964 systemd[1]: kubelet.service: Consumed 182ms CPU time, 107.1M memory peak. Nov 23 22:54:55.237873 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 22:54:55.240366 systemd[1]: Started sshd@0-91.98.91.202:22-139.178.89.65:44008.service - OpenSSH per-connection server daemon (139.178.89.65:44008). 
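kubelet exits here, and again on each scheduled restart that follows, because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written later in provisioning, typically when kubeadm initialises or joins the node, so these early failures are expected at this stage. An illustrative pre-check for the same condition, not part of the log:

```python
from pathlib import Path

# The file kubelet complains about in the log entries above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.is_file():
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
else:
    print(f"{KUBELET_CONFIG} missing; kubelet will keep exiting until it is "
          f"written (e.g. by kubeadm during node bootstrap)")
```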
Nov 23 22:54:56.313348 sshd[1702]: Accepted publickey for core from 139.178.89.65 port 44008 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:54:56.318561 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:54:56.331543 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 22:54:56.332897 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 22:54:56.337882 systemd-logind[1501]: New session 1 of user core. Nov 23 22:54:56.366568 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 22:54:56.370038 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 22:54:56.384407 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 22:54:56.387955 systemd-logind[1501]: New session c1 of user core. Nov 23 22:54:56.517543 systemd[1707]: Queued start job for default target default.target. Nov 23 22:54:56.525461 systemd[1707]: Created slice app.slice - User Application Slice. Nov 23 22:54:56.525513 systemd[1707]: Reached target paths.target - Paths. Nov 23 22:54:56.525731 systemd[1707]: Reached target timers.target - Timers. Nov 23 22:54:56.527704 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 22:54:56.553489 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 22:54:56.553651 systemd[1707]: Reached target sockets.target - Sockets. Nov 23 22:54:56.553708 systemd[1707]: Reached target basic.target - Basic System. Nov 23 22:54:56.553739 systemd[1707]: Reached target default.target - Main User Target. Nov 23 22:54:56.553766 systemd[1707]: Startup finished in 158ms. Nov 23 22:54:56.554089 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 22:54:56.561877 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 22:54:57.274539 systemd[1]: Started sshd@1-91.98.91.202:22-139.178.89.65:44014.service - OpenSSH per-connection server daemon (139.178.89.65:44014). Nov 23 22:54:57.618925 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 23 22:54:57.623813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:57.793308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:57.808280 (kubelet)[1729]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:54:57.853216 kubelet[1729]: E1123 22:54:57.853141 1729 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:54:57.857465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:54:57.857805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:54:57.858586 systemd[1]: kubelet.service: Consumed 179ms CPU time, 104.6M memory peak. 
Nov 23 22:54:58.271947 sshd[1718]: Accepted publickey for core from 139.178.89.65 port 44014 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:54:58.273099 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:54:58.279161 systemd-logind[1501]: New session 2 of user core. Nov 23 22:54:58.284958 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 22:54:58.950355 sshd[1736]: Connection closed by 139.178.89.65 port 44014 Nov 23 22:54:58.951478 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Nov 23 22:54:58.958168 systemd[1]: sshd@1-91.98.91.202:22-139.178.89.65:44014.service: Deactivated successfully. Nov 23 22:54:58.960998 systemd[1]: session-2.scope: Deactivated successfully. Nov 23 22:54:58.962685 systemd-logind[1501]: Session 2 logged out. Waiting for processes to exit. Nov 23 22:54:58.964593 systemd-logind[1501]: Removed session 2. Nov 23 22:54:59.117247 systemd[1]: Started sshd@2-91.98.91.202:22-139.178.89.65:44022.service - OpenSSH per-connection server daemon (139.178.89.65:44022). Nov 23 22:55:00.122253 sshd[1742]: Accepted publickey for core from 139.178.89.65 port 44022 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:00.124663 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:00.134147 systemd-logind[1501]: New session 3 of user core. Nov 23 22:55:00.140014 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 22:55:00.793659 sshd[1745]: Connection closed by 139.178.89.65 port 44022 Nov 23 22:55:00.792707 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:00.799064 systemd[1]: sshd@2-91.98.91.202:22-139.178.89.65:44022.service: Deactivated successfully. Nov 23 22:55:00.801012 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 22:55:00.803300 systemd-logind[1501]: Session 3 logged out. Waiting for processes to exit. Nov 23 22:55:00.805053 systemd-logind[1501]: Removed session 3. Nov 23 22:55:00.968008 systemd[1]: Started sshd@3-91.98.91.202:22-139.178.89.65:52014.service - OpenSSH per-connection server daemon (139.178.89.65:52014). Nov 23 22:55:01.964400 sshd[1751]: Accepted publickey for core from 139.178.89.65 port 52014 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:01.966538 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:01.974812 systemd-logind[1501]: New session 4 of user core. Nov 23 22:55:01.980933 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 22:55:02.640657 sshd[1754]: Connection closed by 139.178.89.65 port 52014 Nov 23 22:55:02.641327 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:02.645853 systemd[1]: sshd@3-91.98.91.202:22-139.178.89.65:52014.service: Deactivated successfully. Nov 23 22:55:02.648264 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 22:55:02.650281 systemd-logind[1501]: Session 4 logged out. Waiting for processes to exit. Nov 23 22:55:02.653199 systemd-logind[1501]: Removed session 4. Nov 23 22:55:02.841809 systemd[1]: Started sshd@4-91.98.91.202:22-139.178.89.65:52024.service - OpenSSH per-connection server daemon (139.178.89.65:52024). 
Nov 23 22:55:03.925100 sshd[1760]: Accepted publickey for core from 139.178.89.65 port 52024 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:03.927246 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:03.935728 systemd-logind[1501]: New session 5 of user core. Nov 23 22:55:03.946917 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 22:55:04.494711 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 22:55:04.495449 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:04.512044 sudo[1764]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:04.683703 sshd[1763]: Connection closed by 139.178.89.65 port 52024 Nov 23 22:55:04.684260 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:04.689049 systemd-logind[1501]: Session 5 logged out. Waiting for processes to exit. Nov 23 22:55:04.689349 systemd[1]: sshd@4-91.98.91.202:22-139.178.89.65:52024.service: Deactivated successfully. Nov 23 22:55:04.691134 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 22:55:04.694091 systemd-logind[1501]: Removed session 5. Nov 23 22:55:04.837230 systemd[1]: Started sshd@5-91.98.91.202:22-139.178.89.65:52032.service - OpenSSH per-connection server daemon (139.178.89.65:52032). Nov 23 22:55:05.814490 sshd[1770]: Accepted publickey for core from 139.178.89.65 port 52032 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:05.816679 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:05.825511 systemd-logind[1501]: New session 6 of user core. Nov 23 22:55:05.832953 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 22:55:06.330867 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 22:55:06.331547 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:06.339721 sudo[1775]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:06.345884 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 22:55:06.346149 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:06.357755 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:55:06.417343 augenrules[1797]: No rules Nov 23 22:55:06.419289 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:55:06.420713 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:55:06.422582 sudo[1774]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:06.578539 sshd[1773]: Connection closed by 139.178.89.65 port 52032 Nov 23 22:55:06.579472 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:06.586599 systemd-logind[1501]: Session 6 logged out. Waiting for processes to exit. Nov 23 22:55:06.586810 systemd[1]: sshd@5-91.98.91.202:22-139.178.89.65:52032.service: Deactivated successfully. Nov 23 22:55:06.590023 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 22:55:06.594976 systemd-logind[1501]: Removed session 6. Nov 23 22:55:06.749030 systemd[1]: Started sshd@6-91.98.91.202:22-139.178.89.65:52042.service - OpenSSH per-connection server daemon (139.178.89.65:52042). 
Nov 23 22:55:07.744162 sshd[1806]: Accepted publickey for core from 139.178.89.65 port 52042 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:07.746568 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:07.752971 systemd-logind[1501]: New session 7 of user core. Nov 23 22:55:07.762214 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 22:55:07.868449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 23 22:55:07.870026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:08.046974 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:08.063310 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:55:08.110427 kubelet[1818]: E1123 22:55:08.110369 1818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:55:08.113833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:55:08.114164 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:55:08.115855 systemd[1]: kubelet.service: Consumed 172ms CPU time, 107.1M memory peak. Nov 23 22:55:08.260034 sudo[1826]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 22:55:08.260611 sudo[1826]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:08.600771 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 22:55:08.630589 (dockerd)[1844]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 22:55:08.866236 dockerd[1844]: time="2025-11-23T22:55:08.866109348Z" level=info msg="Starting up" Nov 23 22:55:08.867669 dockerd[1844]: time="2025-11-23T22:55:08.867607058Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 22:55:08.883877 dockerd[1844]: time="2025-11-23T22:55:08.883754211Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 22:55:08.923498 dockerd[1844]: time="2025-11-23T22:55:08.923224493Z" level=info msg="Loading containers: start." Nov 23 22:55:08.935653 kernel: Initializing XFRM netlink socket Nov 23 22:55:09.144109 update_engine[1502]: I20251123 22:55:09.144059 1502 update_attempter.cc:509] Updating boot flags... Nov 23 22:55:09.308967 systemd-networkd[1417]: docker0: Link UP Nov 23 22:55:09.325140 dockerd[1844]: time="2025-11-23T22:55:09.325088730Z" level=info msg="Loading containers: done." Nov 23 22:55:09.367140 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1564255577-merged.mount: Deactivated successfully. 
Nov 23 22:55:09.374360 dockerd[1844]: time="2025-11-23T22:55:09.374311832Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 22:55:09.374533 dockerd[1844]: time="2025-11-23T22:55:09.374400716Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 22:55:09.374533 dockerd[1844]: time="2025-11-23T22:55:09.374487640Z" level=info msg="Initializing buildkit" Nov 23 22:55:09.419902 dockerd[1844]: time="2025-11-23T22:55:09.419195422Z" level=info msg="Completed buildkit initialization" Nov 23 22:55:09.426980 dockerd[1844]: time="2025-11-23T22:55:09.426855482Z" level=info msg="Daemon has completed initialization" Nov 23 22:55:09.427331 dockerd[1844]: time="2025-11-23T22:55:09.427196417Z" level=info msg="API listen on /run/docker.sock" Nov 23 22:55:09.428793 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 22:55:10.719232 containerd[1533]: time="2025-11-23T22:55:10.719189212Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\"" Nov 23 22:55:11.296547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3705050818.mount: Deactivated successfully. Nov 23 22:55:12.308367 containerd[1533]: time="2025-11-23T22:55:12.308284491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:12.311319 containerd[1533]: time="2025-11-23T22:55:12.310726224Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.6: active requests=0, bytes read=27385802" Nov 23 22:55:12.311751 containerd[1533]: time="2025-11-23T22:55:12.311689221Z" level=info msg="ImageCreate event name:\"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:12.315958 containerd[1533]: time="2025-11-23T22:55:12.315868181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:12.317388 containerd[1533]: time="2025-11-23T22:55:12.317121069Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.6\" with image id \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7c1fe7a61835371b6f42e1acbd87ecc4c456930785ae652e3ce7bcecf8cd4d9c\", size \"27382303\" in 1.597883054s" Nov 23 22:55:12.317388 containerd[1533]: time="2025-11-23T22:55:12.317177431Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.6\" returns image reference \"sha256:1c07507521b1e5dd5a677080f11565aeed667ca44a4119fe6fc7e9452e84707f\"" Nov 23 22:55:12.319157 containerd[1533]: time="2025-11-23T22:55:12.319059943Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\"" Nov 23 22:55:13.639978 containerd[1533]: time="2025-11-23T22:55:13.639914752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:13.641381 containerd[1533]: time="2025-11-23T22:55:13.641076434Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.6: active requests=0, bytes read=23551844" Nov 23 22:55:13.642226 containerd[1533]: time="2025-11-23T22:55:13.642181835Z" level=info msg="ImageCreate event name:\"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:13.646093 containerd[1533]: time="2025-11-23T22:55:13.646049975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:13.648130 containerd[1533]: time="2025-11-23T22:55:13.648070529Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.6\" with image id \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fb1f45370081166f032a2ed3d41deaccc6bb277b4d9841d4aaebad7aada930c5\", size \"25136308\" in 1.328767297s" Nov 23 22:55:13.648454 containerd[1533]: time="2025-11-23T22:55:13.648306098Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.6\" returns image reference \"sha256:0e8db523b16722887ebe961048a14cebe9778389b0045fc9e461ca509bed1758\"" Nov 23 22:55:13.649419 containerd[1533]: time="2025-11-23T22:55:13.649259052Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\"" Nov 23 22:55:14.773187 containerd[1533]: time="2025-11-23T22:55:14.773124302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:14.775776 containerd[1533]: time="2025-11-23T22:55:14.775717352Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.6: active requests=0, bytes read=18296716" Nov 23 22:55:14.777097 containerd[1533]: time="2025-11-23T22:55:14.777010756Z" level=info msg="ImageCreate event name:\"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:14.781659 containerd[1533]: time="2025-11-23T22:55:14.781125419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:14.782746 containerd[1533]: time="2025-11-23T22:55:14.782348302Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.6\" with image id \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:02bfac33158a2323cd2d4ba729cb9d7be695b172be21dfd3740e4a608d39a378\", size \"19881198\" in 1.133038687s" Nov 23 22:55:14.782746 containerd[1533]: time="2025-11-23T22:55:14.782388623Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.6\" returns image reference \"sha256:4845d8bf054bc037c94329f9ce2fa5bb3a972aefc81d9412e9bd8c5ecc311e80\"" Nov 23 22:55:14.783179 containerd[1533]: time="2025-11-23T22:55:14.783156290Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\"" Nov 23 22:55:15.737458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201553883.mount: Deactivated successfully. 
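Editor's note: dockerd settles on the overlay2 storage driver and warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled; per the warning itself this mainly degrades image-build performance. Once the daemon is listening on /run/docker.sock (as logged above), the effective driver can be confirmed with `docker info`; a small sketch, assuming the docker CLI is installed on the host:

```python
#!/usr/bin/env python3
"""Confirm which storage driver the Docker daemon settled on.

Assumes the docker CLI can reach the daemon socket reported in the log.
"""
import subprocess

def storage_driver() -> str:
    # `docker info --format '{{.Driver}}'` prints just the storage driver name.
    return subprocess.run(
        ["docker", "info", "--format", "{{.Driver}}"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

if __name__ == "__main__":
    print(storage_driver())  # expected: overlay2, per the daemon log above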
Nov 23 22:55:16.053370 containerd[1533]: time="2025-11-23T22:55:16.053002241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:16.054292 containerd[1533]: time="2025-11-23T22:55:16.054013673Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.6: active requests=0, bytes read=28257795" Nov 23 22:55:16.055347 containerd[1533]: time="2025-11-23T22:55:16.055295354Z" level=info msg="ImageCreate event name:\"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:16.059141 containerd[1533]: time="2025-11-23T22:55:16.059097834Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:16.059781 containerd[1533]: time="2025-11-23T22:55:16.059737174Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.6\" with image id \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:9119bd7ae5249b9d8bdd14a7719a0ebf744de112fe618008adca3094a12b67fc\", size \"28256788\" in 1.275918022s" Nov 23 22:55:16.059781 containerd[1533]: time="2025-11-23T22:55:16.059777816Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.6\" returns image reference \"sha256:3edf3fc935ecf2058786113d0a0f95daa919e82f6505e8e3df7b5226ebfedb6b\"" Nov 23 22:55:16.060302 containerd[1533]: time="2025-11-23T22:55:16.060257111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 23 22:55:16.655143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3126629384.mount: Deactivated successfully. 
Nov 23 22:55:17.482667 containerd[1533]: time="2025-11-23T22:55:17.481804567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:17.483343 containerd[1533]: time="2025-11-23T22:55:17.483288572Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152209" Nov 23 22:55:17.484457 containerd[1533]: time="2025-11-23T22:55:17.484403326Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:17.489297 containerd[1533]: time="2025-11-23T22:55:17.489247912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:17.491369 containerd[1533]: time="2025-11-23T22:55:17.490757518Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.430445566s" Nov 23 22:55:17.491369 containerd[1533]: time="2025-11-23T22:55:17.490813240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 23 22:55:17.491640 containerd[1533]: time="2025-11-23T22:55:17.491440419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 22:55:18.013192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251048898.mount: Deactivated successfully. 
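Editor's note: for each control-plane image the containerd lines above report both the bytes read and the wall-clock pull time (for example ~27.4 MB in ~1.6 s for kube-apiserver). A small parser can turn those "bytes read=" / "in <duration>" pairs into rough throughput figures; the regular expressions below are written against the exact phrasing in this journal and would need adjusting for other containerd versions. Feed it the journal text on stdin.

```python
#!/usr/bin/env python3
"""Rough pull-throughput estimates from containerd journal lines.

Assumption: the phrasing matches this log exactly ("bytes read=<N>" and
'Pulled image "<ref>" ... in <duration>'); other versions may differ.
"""
import re
import sys

BYTES_RE = re.compile(r'stop pulling image (?P<ref>\S+): active requests=\d+, bytes read=(?P<bytes>\d+)')
PULLED_RE = re.compile(r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".* in (?P<dur>[\d.]+)(?P<unit>ms|s)')

def main() -> None:
    sizes = {}  # image reference -> bytes read
    for line in sys.stdin:
        if m := BYTES_RE.search(line):
            sizes[m["ref"]] = int(m["bytes"])
        elif m := PULLED_RE.search(line):
            seconds = float(m["dur"]) / (1000.0 if m["unit"] == "ms" else 1.0)
            size = sizes.get(m["ref"])
            if size and seconds > 0:
                print(f'{m["ref"]}: {size / seconds / 1e6:.1f} MB/s over {seconds:.2f}s')

if __name__ == "__main__":
    main()
```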
Nov 23 22:55:18.021326 containerd[1533]: time="2025-11-23T22:55:18.020197441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:55:18.021326 containerd[1533]: time="2025-11-23T22:55:18.021278592Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Nov 23 22:55:18.021929 containerd[1533]: time="2025-11-23T22:55:18.021900290Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:55:18.023864 containerd[1533]: time="2025-11-23T22:55:18.023813225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:55:18.024669 containerd[1533]: time="2025-11-23T22:55:18.024643169Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 533.170469ms" Nov 23 22:55:18.024778 containerd[1533]: time="2025-11-23T22:55:18.024762293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 22:55:18.025597 containerd[1533]: time="2025-11-23T22:55:18.025550716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 23 22:55:18.118431 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 23 22:55:18.120188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:18.276921 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:18.286288 (kubelet)[2211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:55:18.337684 kubelet[2211]: E1123 22:55:18.337242 2211 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:55:18.340616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:55:18.340997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:55:18.341605 systemd[1]: kubelet.service: Consumed 172ms CPU time, 107.6M memory peak. Nov 23 22:55:18.560896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670418238.mount: Deactivated successfully. 
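Editor's note: "kubelet.service: Scheduled restart job, restart counter is at 5" is systemd's Restart= handling re-queueing the unit after each failed exit; the counter it prints should also be visible as the unit's NRestarts property. A sketch for reading it, assuming a systemd host with systemctl on PATH:

```python
#!/usr/bin/env python3
"""Read systemd's restart counter for kubelet.service (sketch only)."""
import subprocess

def restart_count(unit: str = "kubelet.service") -> int:
    # `systemctl show -p NRestarts <unit>` prints a line like "NRestarts=5".
    out = subprocess.run(
        ["systemctl", "show", "-p", "NRestarts", unit],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    return int(out.split("=", 1)[1])

if __name__ == "__main__":
    print(restart_count())
```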
Nov 23 22:55:20.032101 containerd[1533]: time="2025-11-23T22:55:20.032022992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:20.034211 containerd[1533]: time="2025-11-23T22:55:20.034162569Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013713" Nov 23 22:55:20.036273 containerd[1533]: time="2025-11-23T22:55:20.034694823Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:20.038344 containerd[1533]: time="2025-11-23T22:55:20.038305559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:20.040042 containerd[1533]: time="2025-11-23T22:55:20.040007844Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.014399567s" Nov 23 22:55:20.040186 containerd[1533]: time="2025-11-23T22:55:20.040165249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 23 22:55:26.720575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:26.721026 systemd[1]: kubelet.service: Consumed 172ms CPU time, 107.6M memory peak. Nov 23 22:55:26.725574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:26.761606 systemd[1]: Reload requested from client PID 2301 ('systemctl') (unit session-7.scope)... Nov 23 22:55:26.761804 systemd[1]: Reloading... Nov 23 22:55:26.885657 zram_generator::config[2351]: No configuration found. Nov 23 22:55:27.080897 systemd[1]: Reloading finished in 318 ms. Nov 23 22:55:27.136311 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 23 22:55:27.136588 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 23 22:55:27.137039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:27.138710 systemd[1]: kubelet.service: Consumed 113ms CPU time, 95M memory peak. Nov 23 22:55:27.141460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:27.297956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:27.316619 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:55:27.363677 kubelet[2393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:55:27.363677 kubelet[2393]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:55:27.363677 kubelet[2393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:55:27.363677 kubelet[2393]: I1123 22:55:27.363099 2393 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:55:28.083737 kubelet[2393]: I1123 22:55:28.083663 2393 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 23 22:55:28.083737 kubelet[2393]: I1123 22:55:28.083707 2393 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:55:28.084073 kubelet[2393]: I1123 22:55:28.084031 2393 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 22:55:28.121809 kubelet[2393]: E1123 22:55:28.121728 2393 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://91.98.91.202:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 91.98.91.202:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 23 22:55:28.125803 kubelet[2393]: I1123 22:55:28.125768 2393 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:55:28.138340 kubelet[2393]: I1123 22:55:28.138308 2393 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:55:28.141652 kubelet[2393]: I1123 22:55:28.141608 2393 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 23 22:55:28.143654 kubelet[2393]: I1123 22:55:28.143572 2393 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:55:28.143955 kubelet[2393]: I1123 22:55:28.143782 2393 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-2-3-c3120372ad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:55:28.144142 kubelet[2393]: I1123 22:55:28.144128 2393 
topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:55:28.144261 kubelet[2393]: I1123 22:55:28.144249 2393 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 22:55:28.144521 kubelet[2393]: I1123 22:55:28.144505 2393 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:28.148302 kubelet[2393]: I1123 22:55:28.148271 2393 kubelet.go:480] "Attempting to sync node with API server" Nov 23 22:55:28.148683 kubelet[2393]: I1123 22:55:28.148522 2393 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:55:28.148683 kubelet[2393]: I1123 22:55:28.148562 2393 kubelet.go:386] "Adding apiserver pod source" Nov 23 22:55:28.150812 kubelet[2393]: I1123 22:55:28.150787 2393 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:55:28.156408 kubelet[2393]: E1123 22:55:28.156047 2393 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://91.98.91.202:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-2-3-c3120372ad&limit=500&resourceVersion=0\": dial tcp 91.98.91.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 23 22:55:28.158264 kubelet[2393]: E1123 22:55:28.158214 2393 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://91.98.91.202:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 91.98.91.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 23 22:55:28.158354 kubelet[2393]: I1123 22:55:28.158331 2393 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:55:28.159085 kubelet[2393]: I1123 22:55:28.159051 2393 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 22:55:28.159228 kubelet[2393]: W1123 22:55:28.159210 2393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 23 22:55:28.162653 kubelet[2393]: I1123 22:55:28.162611 2393 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 22:55:28.162794 kubelet[2393]: I1123 22:55:28.162783 2393 server.go:1289] "Started kubelet" Nov 23 22:55:28.165275 kubelet[2393]: I1123 22:55:28.165249 2393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:55:28.169007 kubelet[2393]: E1123 22:55:28.167645 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://91.98.91.202:6443/api/v1/namespaces/default/events\": dial tcp 91.98.91.202:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-2-3-c3120372ad.187ac4bee0c3fc44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-2-3-c3120372ad,UID:ci-4459-1-2-3-c3120372ad,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-2-3-c3120372ad,},FirstTimestamp:2025-11-23 22:55:28.162741316 +0000 UTC m=+0.840196249,LastTimestamp:2025-11-23 22:55:28.162741316 +0000 UTC m=+0.840196249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-2-3-c3120372ad,}" Nov 23 22:55:28.170149 kubelet[2393]: I1123 22:55:28.169990 2393 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:55:28.171609 kubelet[2393]: I1123 22:55:28.171270 2393 server.go:317] "Adding debug handlers to kubelet server" Nov 23 22:55:28.176693 kubelet[2393]: I1123 22:55:28.176665 2393 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 22:55:28.177390 kubelet[2393]: E1123 22:55:28.177358 2393 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-2-3-c3120372ad\" not found" Nov 23 22:55:28.177554 kubelet[2393]: I1123 22:55:28.177480 2393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:55:28.177872 kubelet[2393]: I1123 22:55:28.177841 2393 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:55:28.178121 kubelet[2393]: I1123 22:55:28.178093 2393 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:55:28.179998 kubelet[2393]: I1123 22:55:28.179961 2393 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 22:55:28.180284 kubelet[2393]: I1123 22:55:28.180263 2393 reconciler.go:26] "Reconciler: start to sync state" Nov 23 22:55:28.180774 kubelet[2393]: E1123 22:55:28.180715 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.91.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-2-3-c3120372ad?timeout=10s\": dial tcp 91.98.91.202:6443: connect: connection refused" interval="200ms" Nov 23 22:55:28.190804 kubelet[2393]: I1123 22:55:28.190777 2393 factory.go:223] Registration of the containerd container factory successfully Nov 23 22:55:28.190947 kubelet[2393]: I1123 22:55:28.190936 2393 factory.go:223] Registration of the systemd container factory successfully Nov 23 22:55:28.191112 kubelet[2393]: I1123 22:55:28.191092 2393 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such 
file or directory Nov 23 22:55:28.197571 kubelet[2393]: I1123 22:55:28.197501 2393 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 23 22:55:28.198656 kubelet[2393]: I1123 22:55:28.198600 2393 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 23 22:55:28.198656 kubelet[2393]: I1123 22:55:28.198642 2393 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 23 22:55:28.198727 kubelet[2393]: I1123 22:55:28.198684 2393 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 23 22:55:28.198727 kubelet[2393]: I1123 22:55:28.198693 2393 kubelet.go:2436] "Starting kubelet main sync loop" Nov 23 22:55:28.198767 kubelet[2393]: E1123 22:55:28.198738 2393 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:55:28.217667 kubelet[2393]: E1123 22:55:28.217300 2393 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.98.91.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.98.91.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 22:55:28.219303 kubelet[2393]: E1123 22:55:28.218838 2393 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://91.98.91.202:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 91.98.91.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 23 22:55:28.223810 kubelet[2393]: I1123 22:55:28.223606 2393 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:55:28.223810 kubelet[2393]: I1123 22:55:28.223772 2393 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:55:28.224039 kubelet[2393]: I1123 22:55:28.223809 2393 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:28.226103 kubelet[2393]: I1123 22:55:28.226033 2393 policy_none.go:49] "None policy: Start" Nov 23 22:55:28.226103 kubelet[2393]: I1123 22:55:28.226069 2393 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 22:55:28.226103 kubelet[2393]: I1123 22:55:28.226082 2393 state_mem.go:35] "Initializing new in-memory state store" Nov 23 22:55:28.237091 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 22:55:28.252959 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 22:55:28.256717 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
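Editor's note: every "connect: connection refused" against https://91.98.91.202:6443 above is the kubelet's client-go reflectors and certificate bootstrap racing the kube-apiserver static pod, whose sandbox and container are only created further down; nothing listens on 6443 until then, so these errors clear on their own. A minimal probe of that port (address taken from the log; timeouts and intervals are arbitrary assumptions):

```python
#!/usr/bin/env python3
"""Poll the API server port until something accepts TCP connections."""
import socket
import time

HOST, PORT = "91.98.91.202", 6443

def wait_for_apiserver(timeout: float = 120.0, interval: float = 2.0) -> bool:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((HOST, PORT), timeout=2):
                return True          # port open; TLS and auth still need a real client
        except OSError as exc:       # ECONNREFUSED while the static pod is not running yet
            print(f"still waiting: {exc}")
            time.sleep(interval)
    return False

if __name__ == "__main__":
    print("apiserver port open" if wait_for_apiserver() else "gave up")
```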
Nov 23 22:55:28.276825 kubelet[2393]: E1123 22:55:28.276774 2393 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 22:55:28.277119 kubelet[2393]: I1123 22:55:28.277094 2393 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:55:28.277215 kubelet[2393]: I1123 22:55:28.277123 2393 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:55:28.278500 kubelet[2393]: I1123 22:55:28.278248 2393 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:55:28.282538 kubelet[2393]: E1123 22:55:28.282514 2393 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 23 22:55:28.282695 kubelet[2393]: E1123 22:55:28.282559 2393 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-1-2-3-c3120372ad\" not found" Nov 23 22:55:28.313945 systemd[1]: Created slice kubepods-burstable-podfa94ac08355d39971268696a7fb9edf4.slice - libcontainer container kubepods-burstable-podfa94ac08355d39971268696a7fb9edf4.slice. Nov 23 22:55:28.341717 kubelet[2393]: E1123 22:55:28.341565 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.346471 systemd[1]: Created slice kubepods-burstable-pod5f920499c5cb752701c17121d0287295.slice - libcontainer container kubepods-burstable-pod5f920499c5cb752701c17121d0287295.slice. Nov 23 22:55:28.349811 kubelet[2393]: E1123 22:55:28.349771 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.352949 systemd[1]: Created slice kubepods-burstable-podb6c2e338ead85c56512ef79dc49688ee.slice - libcontainer container kubepods-burstable-podb6c2e338ead85c56512ef79dc49688ee.slice. 
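Editor's note: the kubepods-burstable-pod<hash>.slice units created here are the cgroups for the three control-plane static pods the kubelet found under its static pod path (/etc/kubernetes/manifests, per the "Adding static pod path" line above). A quick listing of that directory, assuming the conventional kubeadm layout and file names (they do not appear verbatim in the log):

```python
#!/usr/bin/env python3
"""List the static pod manifests the kubelet is acting on (assumed kubeadm layout)."""
from pathlib import Path

MANIFEST_DIR = Path("/etc/kubernetes/manifests")

def main() -> None:
    for manifest in sorted(MANIFEST_DIR.glob("*.yaml")):
        # Conventionally kube-apiserver.yaml, kube-controller-manager.yaml,
        # kube-scheduler.yaml (and etcd.yaml on stacked control planes).
        print(manifest.name, f"{manifest.stat().st_size} bytes")

if __name__ == "__main__":
    main()
```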
Nov 23 22:55:28.355523 kubelet[2393]: E1123 22:55:28.355463 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.380724 kubelet[2393]: I1123 22:55:28.380609 2393 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.381525 kubelet[2393]: E1123 22:55:28.381421 2393 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.98.91.202:6443/api/v1/nodes\": dial tcp 91.98.91.202:6443: connect: connection refused" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.381596 kubelet[2393]: E1123 22:55:28.381537 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.91.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-2-3-c3120372ad?timeout=10s\": dial tcp 91.98.91.202:6443: connect: connection refused" interval="400ms" Nov 23 22:55:28.482374 kubelet[2393]: I1123 22:55:28.481866 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.482374 kubelet[2393]: I1123 22:55:28.481954 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.482374 kubelet[2393]: I1123 22:55:28.482014 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6c2e338ead85c56512ef79dc49688ee-kubeconfig\") pod \"kube-scheduler-ci-4459-1-2-3-c3120372ad\" (UID: \"b6c2e338ead85c56512ef79dc49688ee\") " pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.482374 kubelet[2393]: I1123 22:55:28.482052 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa94ac08355d39971268696a7fb9edf4-ca-certs\") pod \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" (UID: \"fa94ac08355d39971268696a7fb9edf4\") " pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.482374 kubelet[2393]: I1123 22:55:28.482104 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa94ac08355d39971268696a7fb9edf4-k8s-certs\") pod \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" (UID: \"fa94ac08355d39971268696a7fb9edf4\") " pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.482686 kubelet[2393]: I1123 22:55:28.482140 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa94ac08355d39971268696a7fb9edf4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" (UID: \"fa94ac08355d39971268696a7fb9edf4\") " pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" 
Nov 23 22:55:28.482686 kubelet[2393]: I1123 22:55:28.482224 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-ca-certs\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.482686 kubelet[2393]: I1123 22:55:28.482284 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.482686 kubelet[2393]: I1123 22:55:28.482320 2393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.584602 kubelet[2393]: I1123 22:55:28.584418 2393 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.585133 kubelet[2393]: E1123 22:55:28.585091 2393 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.98.91.202:6443/api/v1/nodes\": dial tcp 91.98.91.202:6443: connect: connection refused" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.644126 containerd[1533]: time="2025-11-23T22:55:28.643970113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-2-3-c3120372ad,Uid:fa94ac08355d39971268696a7fb9edf4,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:28.651224 containerd[1533]: time="2025-11-23T22:55:28.651134894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-2-3-c3120372ad,Uid:5f920499c5cb752701c17121d0287295,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:28.657556 containerd[1533]: time="2025-11-23T22:55:28.657403817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-2-3-c3120372ad,Uid:b6c2e338ead85c56512ef79dc49688ee,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:28.686579 containerd[1533]: time="2025-11-23T22:55:28.686520991Z" level=info msg="connecting to shim 789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec" address="unix:///run/containerd/s/a1889ba13f566dd61208b1c04a1e06f23db967b92de70d7df8215e3fce00c996" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:28.700918 containerd[1533]: time="2025-11-23T22:55:28.700779391Z" level=info msg="connecting to shim 4fc6f4ff59e056b3347a366cfe19ced243a01afd7b6ce54389a5e13cd6913f75" address="unix:///run/containerd/s/702019e9de71aa737f18268d7df9e7d0ff2d4828bb66c93ac874b1a918343d5d" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:28.720224 containerd[1533]: time="2025-11-23T22:55:28.720154173Z" level=info msg="connecting to shim f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1" address="unix:///run/containerd/s/5f8041db99278a701057ceca395d8cfd11b6ae4e28635a54e42c9291d9262e8e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:28.746230 systemd[1]: Started 
cri-containerd-4fc6f4ff59e056b3347a366cfe19ced243a01afd7b6ce54389a5e13cd6913f75.scope - libcontainer container 4fc6f4ff59e056b3347a366cfe19ced243a01afd7b6ce54389a5e13cd6913f75. Nov 23 22:55:28.752767 systemd[1]: Started cri-containerd-789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec.scope - libcontainer container 789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec. Nov 23 22:55:28.778853 systemd[1]: Started cri-containerd-f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1.scope - libcontainer container f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1. Nov 23 22:55:28.782231 kubelet[2393]: E1123 22:55:28.782178 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://91.98.91.202:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-2-3-c3120372ad?timeout=10s\": dial tcp 91.98.91.202:6443: connect: connection refused" interval="800ms" Nov 23 22:55:28.827531 containerd[1533]: time="2025-11-23T22:55:28.827416405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-2-3-c3120372ad,Uid:5f920499c5cb752701c17121d0287295,Namespace:kube-system,Attempt:0,} returns sandbox id \"789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec\"" Nov 23 22:55:28.837961 containerd[1533]: time="2025-11-23T22:55:28.837909692Z" level=info msg="CreateContainer within sandbox \"789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 22:55:28.853474 containerd[1533]: time="2025-11-23T22:55:28.853388316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-2-3-c3120372ad,Uid:fa94ac08355d39971268696a7fb9edf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fc6f4ff59e056b3347a366cfe19ced243a01afd7b6ce54389a5e13cd6913f75\"" Nov 23 22:55:28.863062 containerd[1533]: time="2025-11-23T22:55:28.863002826Z" level=info msg="CreateContainer within sandbox \"4fc6f4ff59e056b3347a366cfe19ced243a01afd7b6ce54389a5e13cd6913f75\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 22:55:28.867484 containerd[1533]: time="2025-11-23T22:55:28.867342551Z" level=info msg="Container 1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:28.871940 containerd[1533]: time="2025-11-23T22:55:28.871879841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-2-3-c3120372ad,Uid:b6c2e338ead85c56512ef79dc49688ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1\"" Nov 23 22:55:28.878288 containerd[1533]: time="2025-11-23T22:55:28.878167804Z" level=info msg="CreateContainer within sandbox \"f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 22:55:28.879540 containerd[1533]: time="2025-11-23T22:55:28.879505751Z" level=info msg="Container 0a254cd05e8ad5554f49fde73b9959b0b8553479ca69dfdc18c3306a871298b2: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:28.881696 containerd[1533]: time="2025-11-23T22:55:28.881611712Z" level=info msg="CreateContainer within sandbox \"789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba\"" 
Nov 23 22:55:28.882541 containerd[1533]: time="2025-11-23T22:55:28.882496010Z" level=info msg="StartContainer for \"1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba\"" Nov 23 22:55:28.884264 containerd[1533]: time="2025-11-23T22:55:28.884227404Z" level=info msg="connecting to shim 1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba" address="unix:///run/containerd/s/a1889ba13f566dd61208b1c04a1e06f23db967b92de70d7df8215e3fce00c996" protocol=ttrpc version=3 Nov 23 22:55:28.891607 containerd[1533]: time="2025-11-23T22:55:28.891505907Z" level=info msg="Container 5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:28.892421 containerd[1533]: time="2025-11-23T22:55:28.892361364Z" level=info msg="CreateContainer within sandbox \"4fc6f4ff59e056b3347a366cfe19ced243a01afd7b6ce54389a5e13cd6913f75\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a254cd05e8ad5554f49fde73b9959b0b8553479ca69dfdc18c3306a871298b2\"" Nov 23 22:55:28.895783 containerd[1533]: time="2025-11-23T22:55:28.894754211Z" level=info msg="StartContainer for \"0a254cd05e8ad5554f49fde73b9959b0b8553479ca69dfdc18c3306a871298b2\"" Nov 23 22:55:28.900933 containerd[1533]: time="2025-11-23T22:55:28.900875852Z" level=info msg="CreateContainer within sandbox \"f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd\"" Nov 23 22:55:28.901443 containerd[1533]: time="2025-11-23T22:55:28.901392142Z" level=info msg="connecting to shim 0a254cd05e8ad5554f49fde73b9959b0b8553479ca69dfdc18c3306a871298b2" address="unix:///run/containerd/s/702019e9de71aa737f18268d7df9e7d0ff2d4828bb66c93ac874b1a918343d5d" protocol=ttrpc version=3 Nov 23 22:55:28.902412 containerd[1533]: time="2025-11-23T22:55:28.902366601Z" level=info msg="StartContainer for \"5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd\"" Nov 23 22:55:28.903521 containerd[1533]: time="2025-11-23T22:55:28.903478143Z" level=info msg="connecting to shim 5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd" address="unix:///run/containerd/s/5f8041db99278a701057ceca395d8cfd11b6ae4e28635a54e42c9291d9262e8e" protocol=ttrpc version=3 Nov 23 22:55:28.922905 systemd[1]: Started cri-containerd-1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba.scope - libcontainer container 1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba. Nov 23 22:55:28.948900 systemd[1]: Started cri-containerd-0a254cd05e8ad5554f49fde73b9959b0b8553479ca69dfdc18c3306a871298b2.scope - libcontainer container 0a254cd05e8ad5554f49fde73b9959b0b8553479ca69dfdc18c3306a871298b2. Nov 23 22:55:28.956869 systemd[1]: Started cri-containerd-5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd.scope - libcontainer container 5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd. 
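Editor's note: the "RunPodSandbox ... returns sandbox id", "CreateContainer within sandbox", "connecting to shim", and "StartContainer" lines are the CRI calls containerd services for each static pod: one pause sandbox per pod, then one container per control-plane binary, each wired to a shim over a ttrpc socket. Once the node is up, the same objects can be inspected with crictl; the sketch below simply shells out to it, assuming crictl is installed and containerd uses its default socket (both assumptions, not facts from the log).

```python
#!/usr/bin/env python3
"""Inspect CRI sandboxes and containers via crictl (assumed default containerd socket)."""
import subprocess

ENDPOINT = "unix:///run/containerd/containerd.sock"

def crictl(*args: str) -> str:
    cmd = ["crictl", "--runtime-endpoint", ENDPOINT, *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(crictl("pods"))       # pod sandboxes, one per static pod
    print(crictl("ps", "-a"))   # containers: kube-apiserver, controller-manager, scheduler
```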
Nov 23 22:55:28.991096 kubelet[2393]: I1123 22:55:28.991055 2393 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:28.991529 kubelet[2393]: E1123 22:55:28.991473 2393 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://91.98.91.202:6443/api/v1/nodes\": dial tcp 91.98.91.202:6443: connect: connection refused" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:29.017798 containerd[1533]: time="2025-11-23T22:55:29.016800725Z" level=info msg="StartContainer for \"1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba\" returns successfully" Nov 23 22:55:29.054071 containerd[1533]: time="2025-11-23T22:55:29.053983953Z" level=info msg="StartContainer for \"0a254cd05e8ad5554f49fde73b9959b0b8553479ca69dfdc18c3306a871298b2\" returns successfully" Nov 23 22:55:29.067286 containerd[1533]: time="2025-11-23T22:55:29.067206205Z" level=info msg="StartContainer for \"5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd\" returns successfully" Nov 23 22:55:29.079339 kubelet[2393]: E1123 22:55:29.079287 2393 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://91.98.91.202:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 91.98.91.202:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 23 22:55:29.231078 kubelet[2393]: E1123 22:55:29.230961 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:29.236472 kubelet[2393]: E1123 22:55:29.236043 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:29.240661 kubelet[2393]: E1123 22:55:29.240286 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:29.795751 kubelet[2393]: I1123 22:55:29.795113 2393 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:30.242422 kubelet[2393]: E1123 22:55:30.242372 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:30.242883 kubelet[2393]: E1123 22:55:30.242738 2393 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:31.807802 kubelet[2393]: E1123 22:55:31.807751 2393 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-1-2-3-c3120372ad\" not found" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:31.905261 kubelet[2393]: I1123 22:55:31.905159 2393 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:31.979411 kubelet[2393]: I1123 22:55:31.978894 2393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.014791 kubelet[2393]: E1123 22:55:32.014750 2393 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.014986 kubelet[2393]: I1123 22:55:32.014969 2393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.029852 kubelet[2393]: E1123 22:55:32.029811 2393 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.030093 kubelet[2393]: I1123 22:55:32.030023 2393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.042176 kubelet[2393]: E1123 22:55:32.042131 2393 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-2-3-c3120372ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.085709 kubelet[2393]: I1123 22:55:32.085564 2393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.090856 kubelet[2393]: E1123 22:55:32.089468 2393 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-2-3-c3120372ad\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:32.159295 kubelet[2393]: I1123 22:55:32.159226 2393 apiserver.go:52] "Watching apiserver" Nov 23 22:55:32.180450 kubelet[2393]: I1123 22:55:32.180406 2393 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 22:55:32.876745 kubelet[2393]: I1123 22:55:32.876709 2393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:34.584338 systemd[1]: Reload requested from client PID 2669 ('systemctl') (unit session-7.scope)... Nov 23 22:55:34.584360 systemd[1]: Reloading... Nov 23 22:55:34.689737 zram_generator::config[2716]: No configuration found. Nov 23 22:55:34.824460 kubelet[2393]: I1123 22:55:34.823991 2393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:34.901295 systemd[1]: Reloading finished in 316 ms. Nov 23 22:55:34.940509 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:34.956434 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:55:34.956831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:34.956915 systemd[1]: kubelet.service: Consumed 1.308s CPU time, 125.8M memory peak. Nov 23 22:55:34.959748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:35.145794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:35.160594 (kubelet)[2757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:55:35.228653 kubelet[2757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
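Editor's note: the "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors are transient. Mirror pods for static pods request the built-in system-node-critical priority class, which the freshly started kube-apiserver only creates once its bootstrap controllers have run, so the kubelet retries until it exists. A hedged check once a kubeconfig is available (kubectl and the admin.conf path are assumptions, not shown in the log):

```python
#!/usr/bin/env python3
"""Check whether the built-in priority classes exist yet (assumed kubeadm kubeconfig)."""
import subprocess

KUBECONFIG = "/etc/kubernetes/admin.conf"  # assumption

def has_priority_class(name: str) -> bool:
    result = subprocess.run(
        ["kubectl", "--kubeconfig", KUBECONFIG, "get", "priorityclass", name],
        capture_output=True, text=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for pc in ("system-node-critical", "system-cluster-critical"):
        print(pc, "present" if has_priority_class(pc) else "missing")
```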
Nov 23 22:55:35.228653 kubelet[2757]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:55:35.228653 kubelet[2757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:55:35.228653 kubelet[2757]: I1123 22:55:35.227757 2757 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:55:35.239636 kubelet[2757]: I1123 22:55:35.239582 2757 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 23 22:55:35.239822 kubelet[2757]: I1123 22:55:35.239810 2757 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:55:35.240171 kubelet[2757]: I1123 22:55:35.240150 2757 server.go:956] "Client rotation is on, will bootstrap in background" Nov 23 22:55:35.242202 kubelet[2757]: I1123 22:55:35.242164 2757 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 23 22:55:35.245862 kubelet[2757]: I1123 22:55:35.245829 2757 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:55:35.252961 kubelet[2757]: I1123 22:55:35.252899 2757 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:55:35.260719 kubelet[2757]: I1123 22:55:35.260541 2757 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 23 22:55:35.261306 kubelet[2757]: I1123 22:55:35.260818 2757 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:55:35.261306 kubelet[2757]: I1123 22:55:35.260855 2757 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4459-1-2-3-c3120372ad","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:55:35.261306 kubelet[2757]: I1123 22:55:35.261097 2757 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:55:35.261306 kubelet[2757]: I1123 22:55:35.261106 2757 container_manager_linux.go:303] "Creating device plugin manager" Nov 23 22:55:35.261306 kubelet[2757]: I1123 22:55:35.261163 2757 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:35.262233 kubelet[2757]: I1123 22:55:35.261382 2757 kubelet.go:480] "Attempting to sync node with API server" Nov 23 22:55:35.262233 kubelet[2757]: I1123 22:55:35.261398 2757 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:55:35.262233 kubelet[2757]: I1123 22:55:35.261421 2757 kubelet.go:386] "Adding apiserver pod source" Nov 23 22:55:35.262233 kubelet[2757]: I1123 22:55:35.261436 2757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:55:35.267279 kubelet[2757]: I1123 22:55:35.267223 2757 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:55:35.268037 kubelet[2757]: I1123 22:55:35.268016 2757 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 23 22:55:35.271850 kubelet[2757]: I1123 22:55:35.271818 2757 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 22:55:35.272003 kubelet[2757]: I1123 22:55:35.271865 2757 server.go:1289] "Started kubelet" Nov 23 22:55:35.275336 kubelet[2757]: I1123 22:55:35.275274 2757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:55:35.276273 kubelet[2757]: I1123 22:55:35.275928 2757 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:55:35.276273 kubelet[2757]: I1123 22:55:35.275991 2757 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:55:35.276772 kubelet[2757]: 
I1123 22:55:35.276741 2757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:55:35.277507 kubelet[2757]: I1123 22:55:35.277488 2757 server.go:317] "Adding debug handlers to kubelet server" Nov 23 22:55:35.291964 kubelet[2757]: I1123 22:55:35.291922 2757 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:55:35.298576 kubelet[2757]: I1123 22:55:35.298537 2757 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 22:55:35.299881 kubelet[2757]: E1123 22:55:35.299851 2757 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-2-3-c3120372ad\" not found" Nov 23 22:55:35.300498 kubelet[2757]: I1123 22:55:35.300472 2757 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 22:55:35.300636 kubelet[2757]: I1123 22:55:35.300611 2757 reconciler.go:26] "Reconciler: start to sync state" Nov 23 22:55:35.302708 kubelet[2757]: I1123 22:55:35.301360 2757 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 23 22:55:35.304498 kubelet[2757]: I1123 22:55:35.304461 2757 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:55:35.316141 kubelet[2757]: E1123 22:55:35.316111 2757 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:55:35.318340 kubelet[2757]: I1123 22:55:35.317079 2757 factory.go:223] Registration of the containerd container factory successfully Nov 23 22:55:35.318340 kubelet[2757]: I1123 22:55:35.317100 2757 factory.go:223] Registration of the systemd container factory successfully Nov 23 22:55:35.321912 kubelet[2757]: I1123 22:55:35.321868 2757 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 23 22:55:35.321912 kubelet[2757]: I1123 22:55:35.321903 2757 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 23 22:55:35.322051 kubelet[2757]: I1123 22:55:35.321925 2757 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
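[Editor's note] The container-manager entry above dumps the kubelet's effective node config as a single JSON blob (systemd cgroup driver on cgroup v2, the hard-eviction thresholds, and so on). As a quick way to inspect that blob outside the kubelet, here is a minimal Go sketch that decodes a shortened excerpt of it into an ad-hoc struct; the struct and the field selection are illustrative assumptions, not the kubelet's own types.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Ad-hoc view of a few fields from the kubelet's logged nodeConfig; the real
// object carries many more fields than are modeled here.
type nodeConfig struct {
	CgroupDriver           string `json:"CgroupDriver"`
	CgroupRoot             string `json:"CgroupRoot"`
	CgroupVersion          int    `json:"CgroupVersion"`
	HardEvictionThresholds []struct {
		Signal   string `json:"Signal"`
		Operator string `json:"Operator"`
		Value    struct {
			Quantity   *string `json:"Quantity"`
			Percentage float64 `json:"Percentage"`
		} `json:"Value"`
	} `json:"HardEvictionThresholds"`
}

func main() {
	// Shortened excerpt of the nodeConfig JSON from the log entry above.
	raw := `{"CgroupDriver":"systemd","CgroupRoot":"/","CgroupVersion":2,
	 "HardEvictionThresholds":[
	  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("cgroup driver=%q root=%q version=%d\n", cfg.CgroupDriver, cfg.CgroupRoot, cfg.CgroupVersion)
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("evict hard: %s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("evict hard: %s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}
```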
Nov 23 22:55:35.322051 kubelet[2757]: I1123 22:55:35.321932 2757 kubelet.go:2436] "Starting kubelet main sync loop" Nov 23 22:55:35.322051 kubelet[2757]: E1123 22:55:35.321972 2757 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:55:35.382601 kubelet[2757]: I1123 22:55:35.382552 2757 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:55:35.382782 kubelet[2757]: I1123 22:55:35.382766 2757 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:55:35.382851 kubelet[2757]: I1123 22:55:35.382843 2757 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:35.383033 kubelet[2757]: I1123 22:55:35.383018 2757 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 22:55:35.383107 kubelet[2757]: I1123 22:55:35.383084 2757 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 22:55:35.383157 kubelet[2757]: I1123 22:55:35.383149 2757 policy_none.go:49] "None policy: Start" Nov 23 22:55:35.383229 kubelet[2757]: I1123 22:55:35.383202 2757 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 22:55:35.383284 kubelet[2757]: I1123 22:55:35.383276 2757 state_mem.go:35] "Initializing new in-memory state store" Nov 23 22:55:35.383453 kubelet[2757]: I1123 22:55:35.383439 2757 state_mem.go:75] "Updated machine memory state" Nov 23 22:55:35.389827 kubelet[2757]: E1123 22:55:35.389791 2757 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 23 22:55:35.391640 kubelet[2757]: I1123 22:55:35.391142 2757 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:55:35.391640 kubelet[2757]: I1123 22:55:35.391165 2757 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:55:35.391640 kubelet[2757]: I1123 22:55:35.391474 2757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:55:35.394245 kubelet[2757]: E1123 22:55:35.393434 2757 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 22:55:35.424270 kubelet[2757]: I1123 22:55:35.424111 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.426836 kubelet[2757]: I1123 22:55:35.425407 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.428292 kubelet[2757]: I1123 22:55:35.425665 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.436707 kubelet[2757]: E1123 22:55:35.436669 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" already exists" pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.437143 kubelet[2757]: E1123 22:55:35.437077 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" already exists" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.495817 kubelet[2757]: I1123 22:55:35.495129 2757 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.504692 kubelet[2757]: I1123 22:55:35.504649 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa94ac08355d39971268696a7fb9edf4-ca-certs\") pod \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" (UID: \"fa94ac08355d39971268696a7fb9edf4\") " pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.504886 kubelet[2757]: I1123 22:55:35.504871 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa94ac08355d39971268696a7fb9edf4-k8s-certs\") pod \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" (UID: \"fa94ac08355d39971268696a7fb9edf4\") " pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.505311 kubelet[2757]: I1123 22:55:35.505288 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-ca-certs\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.505589 kubelet[2757]: I1123 22:55:35.505566 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.506097 kubelet[2757]: I1123 22:55:35.505774 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.506097 kubelet[2757]: I1123 22:55:35.505804 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/fa94ac08355d39971268696a7fb9edf4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" (UID: \"fa94ac08355d39971268696a7fb9edf4\") " pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.506097 kubelet[2757]: I1123 22:55:35.505855 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.506097 kubelet[2757]: I1123 22:55:35.505876 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f920499c5cb752701c17121d0287295-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-2-3-c3120372ad\" (UID: \"5f920499c5cb752701c17121d0287295\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.506097 kubelet[2757]: I1123 22:55:35.505906 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6c2e338ead85c56512ef79dc49688ee-kubeconfig\") pod \"kube-scheduler-ci-4459-1-2-3-c3120372ad\" (UID: \"b6c2e338ead85c56512ef79dc49688ee\") " pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.507854 kubelet[2757]: I1123 22:55:35.507699 2757 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:35.507854 kubelet[2757]: I1123 22:55:35.507795 2757 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-2-3-c3120372ad" Nov 23 22:55:36.271002 kubelet[2757]: I1123 22:55:36.270745 2757 apiserver.go:52] "Watching apiserver" Nov 23 22:55:36.300645 kubelet[2757]: I1123 22:55:36.300589 2757 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 22:55:36.360654 kubelet[2757]: I1123 22:55:36.359698 2757 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:36.368618 kubelet[2757]: E1123 22:55:36.368561 2757 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-2-3-c3120372ad\" already exists" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" Nov 23 22:55:36.419263 kubelet[2757]: I1123 22:55:36.419042 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-1-2-3-c3120372ad" podStartSLOduration=2.419019615 podStartE2EDuration="2.419019615s" podCreationTimestamp="2025-11-23 22:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:55:36.397065194 +0000 UTC m=+1.226019622" watchObservedRunningTime="2025-11-23 22:55:36.419019615 +0000 UTC m=+1.247974043" Nov 23 22:55:36.452479 kubelet[2757]: I1123 22:55:36.452413 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-1-2-3-c3120372ad" podStartSLOduration=4.452395615 podStartE2EDuration="4.452395615s" podCreationTimestamp="2025-11-23 22:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-11-23 22:55:36.447180134 +0000 UTC m=+1.276134562" watchObservedRunningTime="2025-11-23 22:55:36.452395615 +0000 UTC m=+1.281350043" Nov 23 22:55:36.452725 kubelet[2757]: I1123 22:55:36.452499 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-1-2-3-c3120372ad" podStartSLOduration=1.452494537 podStartE2EDuration="1.452494537s" podCreationTimestamp="2025-11-23 22:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:55:36.423533486 +0000 UTC m=+1.252487914" watchObservedRunningTime="2025-11-23 22:55:36.452494537 +0000 UTC m=+1.281448965" Nov 23 22:55:41.379908 kubelet[2757]: I1123 22:55:41.379674 2757 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 22:55:41.380906 containerd[1533]: time="2025-11-23T22:55:41.380751298Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 22:55:41.381391 kubelet[2757]: I1123 22:55:41.381188 2757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 22:55:42.256788 systemd[1]: Created slice kubepods-besteffort-poda688c4c6_3398_4bd7_b3ff_96c583647093.slice - libcontainer container kubepods-besteffort-poda688c4c6_3398_4bd7_b3ff_96c583647093.slice. Nov 23 22:55:42.352957 kubelet[2757]: I1123 22:55:42.352902 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgl6j\" (UniqueName: \"kubernetes.io/projected/a688c4c6-3398-4bd7-b3ff-96c583647093-kube-api-access-tgl6j\") pod \"kube-proxy-fgpfs\" (UID: \"a688c4c6-3398-4bd7-b3ff-96c583647093\") " pod="kube-system/kube-proxy-fgpfs" Nov 23 22:55:42.352957 kubelet[2757]: I1123 22:55:42.352962 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a688c4c6-3398-4bd7-b3ff-96c583647093-kube-proxy\") pod \"kube-proxy-fgpfs\" (UID: \"a688c4c6-3398-4bd7-b3ff-96c583647093\") " pod="kube-system/kube-proxy-fgpfs" Nov 23 22:55:42.353172 kubelet[2757]: I1123 22:55:42.352986 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a688c4c6-3398-4bd7-b3ff-96c583647093-xtables-lock\") pod \"kube-proxy-fgpfs\" (UID: \"a688c4c6-3398-4bd7-b3ff-96c583647093\") " pod="kube-system/kube-proxy-fgpfs" Nov 23 22:55:42.353172 kubelet[2757]: I1123 22:55:42.353004 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a688c4c6-3398-4bd7-b3ff-96c583647093-lib-modules\") pod \"kube-proxy-fgpfs\" (UID: \"a688c4c6-3398-4bd7-b3ff-96c583647093\") " pod="kube-system/kube-proxy-fgpfs" Nov 23 22:55:42.422150 systemd[1]: Created slice kubepods-besteffort-pod71da6579_868c_473d_9f16_6f72d5165d9d.slice - libcontainer container kubepods-besteffort-pod71da6579_868c_473d_9f16_6f72d5165d9d.slice. 
Nov 23 22:55:42.454157 kubelet[2757]: I1123 22:55:42.453791 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71da6579-868c-473d-9f16-6f72d5165d9d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-n45fn\" (UID: \"71da6579-868c-473d-9f16-6f72d5165d9d\") " pod="tigera-operator/tigera-operator-7dcd859c48-n45fn" Nov 23 22:55:42.454157 kubelet[2757]: I1123 22:55:42.453873 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wmn\" (UniqueName: \"kubernetes.io/projected/71da6579-868c-473d-9f16-6f72d5165d9d-kube-api-access-d4wmn\") pod \"tigera-operator-7dcd859c48-n45fn\" (UID: \"71da6579-868c-473d-9f16-6f72d5165d9d\") " pod="tigera-operator/tigera-operator-7dcd859c48-n45fn" Nov 23 22:55:42.570714 containerd[1533]: time="2025-11-23T22:55:42.569953768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgpfs,Uid:a688c4c6-3398-4bd7-b3ff-96c583647093,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:42.593303 containerd[1533]: time="2025-11-23T22:55:42.593098723Z" level=info msg="connecting to shim 36b41ec8d8117638ed0e1554357a820b8860954445d8f6d45eff8467054c9dbb" address="unix:///run/containerd/s/689c537a9cdb9485b156e990eafecf091a0b4db91004791dfc7de810fd33d3d5" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:42.624171 systemd[1]: Started cri-containerd-36b41ec8d8117638ed0e1554357a820b8860954445d8f6d45eff8467054c9dbb.scope - libcontainer container 36b41ec8d8117638ed0e1554357a820b8860954445d8f6d45eff8467054c9dbb. Nov 23 22:55:42.658911 containerd[1533]: time="2025-11-23T22:55:42.658865218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgpfs,Uid:a688c4c6-3398-4bd7-b3ff-96c583647093,Namespace:kube-system,Attempt:0,} returns sandbox id \"36b41ec8d8117638ed0e1554357a820b8860954445d8f6d45eff8467054c9dbb\"" Nov 23 22:55:42.666522 containerd[1533]: time="2025-11-23T22:55:42.666451041Z" level=info msg="CreateContainer within sandbox \"36b41ec8d8117638ed0e1554357a820b8860954445d8f6d45eff8467054c9dbb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 22:55:42.682161 containerd[1533]: time="2025-11-23T22:55:42.682085493Z" level=info msg="Container 0c3662210c413137c78395c1ea436be0177ba69ab54d35fbea6d098e90f2ffb6: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:42.700712 containerd[1533]: time="2025-11-23T22:55:42.700460783Z" level=info msg="CreateContainer within sandbox \"36b41ec8d8117638ed0e1554357a820b8860954445d8f6d45eff8467054c9dbb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c3662210c413137c78395c1ea436be0177ba69ab54d35fbea6d098e90f2ffb6\"" Nov 23 22:55:42.701145 containerd[1533]: time="2025-11-23T22:55:42.701115392Z" level=info msg="StartContainer for \"0c3662210c413137c78395c1ea436be0177ba69ab54d35fbea6d098e90f2ffb6\"" Nov 23 22:55:42.703009 containerd[1533]: time="2025-11-23T22:55:42.702970578Z" level=info msg="connecting to shim 0c3662210c413137c78395c1ea436be0177ba69ab54d35fbea6d098e90f2ffb6" address="unix:///run/containerd/s/689c537a9cdb9485b156e990eafecf091a0b4db91004791dfc7de810fd33d3d5" protocol=ttrpc version=3 Nov 23 22:55:42.726527 containerd[1533]: time="2025-11-23T22:55:42.726472257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n45fn,Uid:71da6579-868c-473d-9f16-6f72d5165d9d,Namespace:tigera-operator,Attempt:0,}" Nov 23 22:55:42.729406 systemd[1]: Started 
cri-containerd-0c3662210c413137c78395c1ea436be0177ba69ab54d35fbea6d098e90f2ffb6.scope - libcontainer container 0c3662210c413137c78395c1ea436be0177ba69ab54d35fbea6d098e90f2ffb6. Nov 23 22:55:42.750717 containerd[1533]: time="2025-11-23T22:55:42.750659466Z" level=info msg="connecting to shim 8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510" address="unix:///run/containerd/s/529c1253dcbe09ace2825b04f3305e78f0a69a6632ca02cb0521e112b584c75e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:42.789957 systemd[1]: Started cri-containerd-8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510.scope - libcontainer container 8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510. Nov 23 22:55:42.826937 containerd[1533]: time="2025-11-23T22:55:42.826716181Z" level=info msg="StartContainer for \"0c3662210c413137c78395c1ea436be0177ba69ab54d35fbea6d098e90f2ffb6\" returns successfully" Nov 23 22:55:42.861607 containerd[1533]: time="2025-11-23T22:55:42.861517294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-n45fn,Uid:71da6579-868c-473d-9f16-6f72d5165d9d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510\"" Nov 23 22:55:42.864728 containerd[1533]: time="2025-11-23T22:55:42.864619417Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 22:55:43.759908 kubelet[2757]: I1123 22:55:43.759674 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fgpfs" podStartSLOduration=1.759655756 podStartE2EDuration="1.759655756s" podCreationTimestamp="2025-11-23 22:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:55:43.408023864 +0000 UTC m=+8.236978332" watchObservedRunningTime="2025-11-23 22:55:43.759655756 +0000 UTC m=+8.588610184" Nov 23 22:55:44.582963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584298652.mount: Deactivated successfully. 
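[Editor's note] The containerd messages above trace the CRI call sequence for the kube-proxy pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer runs it. As a rough illustration of that same sequence from a client's point of view, here is a hedged Go sketch against the CRI v1 gRPC API (k8s.io/cri-api); the socket path, the kube-proxy image tag, and the reduced metadata are assumptions lifted from or inferred from the log, and real callers set many more fields (log directory, mounts, security context).

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket path on this node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox -> sandbox id (the 36b41ec8... id in the log).
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-fgpfs",
			Uid:       "a688c4c6-3398-4bd7-b3ff-96c583647093",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within the sandbox -> container id.
	// Image tag is an assumption based on the logged kubelet version v1.33.0.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("sandbox:", sb.PodSandboxId, "container:", cc.ContainerId)
}
```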
Nov 23 22:55:45.000587 containerd[1533]: time="2025-11-23T22:55:45.000503473Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:45.002962 containerd[1533]: time="2025-11-23T22:55:45.002919304Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 22:55:45.004668 containerd[1533]: time="2025-11-23T22:55:45.004430523Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:45.007361 containerd[1533]: time="2025-11-23T22:55:45.007308360Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:45.009113 containerd[1533]: time="2025-11-23T22:55:45.008850460Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.144158323s" Nov 23 22:55:45.009113 containerd[1533]: time="2025-11-23T22:55:45.008923021Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 22:55:45.016746 containerd[1533]: time="2025-11-23T22:55:45.016688121Z" level=info msg="CreateContainer within sandbox \"8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 22:55:45.026890 containerd[1533]: time="2025-11-23T22:55:45.026265164Z" level=info msg="Container d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:45.038708 containerd[1533]: time="2025-11-23T22:55:45.038660324Z" level=info msg="CreateContainer within sandbox \"8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2\"" Nov 23 22:55:45.039766 containerd[1533]: time="2025-11-23T22:55:45.039723658Z" level=info msg="StartContainer for \"d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2\"" Nov 23 22:55:45.040896 containerd[1533]: time="2025-11-23T22:55:45.040846912Z" level=info msg="connecting to shim d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2" address="unix:///run/containerd/s/529c1253dcbe09ace2825b04f3305e78f0a69a6632ca02cb0521e112b584c75e" protocol=ttrpc version=3 Nov 23 22:55:45.070876 systemd[1]: Started cri-containerd-d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2.scope - libcontainer container d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2. 
Nov 23 22:55:45.110472 containerd[1533]: time="2025-11-23T22:55:45.110427208Z" level=info msg="StartContainer for \"d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2\" returns successfully" Nov 23 22:55:51.258694 sudo[1826]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:51.416561 sshd[1809]: Connection closed by 139.178.89.65 port 52042 Nov 23 22:55:51.416045 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:51.424041 systemd[1]: sshd@6-91.98.91.202:22-139.178.89.65:52042.service: Deactivated successfully. Nov 23 22:55:51.433373 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 22:55:51.433576 systemd[1]: session-7.scope: Consumed 8.525s CPU time, 222.3M memory peak. Nov 23 22:55:51.436136 systemd-logind[1501]: Session 7 logged out. Waiting for processes to exit. Nov 23 22:55:51.439503 systemd-logind[1501]: Removed session 7. Nov 23 22:56:01.802943 kubelet[2757]: I1123 22:56:01.802779 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-n45fn" podStartSLOduration=17.656645371 podStartE2EDuration="19.8027552s" podCreationTimestamp="2025-11-23 22:55:42 +0000 UTC" firstStartedPulling="2025-11-23 22:55:42.864017128 +0000 UTC m=+7.692971596" lastFinishedPulling="2025-11-23 22:55:45.010126997 +0000 UTC m=+9.839081425" observedRunningTime="2025-11-23 22:55:45.417002115 +0000 UTC m=+10.245956543" watchObservedRunningTime="2025-11-23 22:56:01.8027552 +0000 UTC m=+26.631709668" Nov 23 22:56:01.817453 systemd[1]: Created slice kubepods-besteffort-podf9cfd02a_c53c_423f_83d8_89fd45642cc6.slice - libcontainer container kubepods-besteffort-podf9cfd02a_c53c_423f_83d8_89fd45642cc6.slice. Nov 23 22:56:01.894689 kubelet[2757]: I1123 22:56:01.894460 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f9cfd02a-c53c-423f-83d8-89fd45642cc6-typha-certs\") pod \"calico-typha-ff97b4ccb-pjn9j\" (UID: \"f9cfd02a-c53c-423f-83d8-89fd45642cc6\") " pod="calico-system/calico-typha-ff97b4ccb-pjn9j" Nov 23 22:56:01.894689 kubelet[2757]: I1123 22:56:01.894531 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx9nz\" (UniqueName: \"kubernetes.io/projected/f9cfd02a-c53c-423f-83d8-89fd45642cc6-kube-api-access-hx9nz\") pod \"calico-typha-ff97b4ccb-pjn9j\" (UID: \"f9cfd02a-c53c-423f-83d8-89fd45642cc6\") " pod="calico-system/calico-typha-ff97b4ccb-pjn9j" Nov 23 22:56:01.894689 kubelet[2757]: I1123 22:56:01.894565 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9cfd02a-c53c-423f-83d8-89fd45642cc6-tigera-ca-bundle\") pod \"calico-typha-ff97b4ccb-pjn9j\" (UID: \"f9cfd02a-c53c-423f-83d8-89fd45642cc6\") " pod="calico-system/calico-typha-ff97b4ccb-pjn9j" Nov 23 22:56:01.939263 systemd[1]: Created slice kubepods-besteffort-pod32cbdab4_ab12_407f_b0da_471787a7c407.slice - libcontainer container kubepods-besteffort-pod32cbdab4_ab12_407f_b0da_471787a7c407.slice. 
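[Editor's note] The pod_startup_latency_tracker entry above for tigera-operator reports podStartE2EDuration=19.8027552s and podStartSLOduration=17.656645371s alongside raw timestamps. Those figures are consistent with plain differences of the logged times: E2E is watchObservedRunningTime minus podCreationTimestamp, and the SLO figure is roughly E2E minus the image-pull window (firstStartedPulling to lastFinishedPulling), give or take the monotonic-clock offsets (the m=+... values) the kubelet actually uses. A small Go check of that arithmetic, parsing the timestamp format exactly as it appears in the log:

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the "2025-11-23 22:55:42.864017128 +0000 UTC" form in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the tigera-operator startup-latency entry above.
	created := mustParse("2025-11-23 22:55:42 +0000 UTC")
	pullStart := mustParse("2025-11-23 22:55:42.864017128 +0000 UTC")
	pullEnd := mustParse("2025-11-23 22:55:45.010126997 +0000 UTC")
	observed := mustParse("2025-11-23 22:56:01.8027552 +0000 UTC")

	e2e := observed.Sub(created)   // ~19.8027552s, the logged podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // ~2.146s pulling quay.io/tigera/operator:v1.38.7
	// e2e-pull comes out ~17.6566s, close to the logged podStartSLOduration.
	fmt.Println("e2e:", e2e, "pull:", pull, "e2e-pull:", e2e-pull)
}
```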
Nov 23 22:56:01.995919 kubelet[2757]: I1123 22:56:01.995827 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-cni-bin-dir\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.996602 kubelet[2757]: I1123 22:56:01.996330 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/32cbdab4-ab12-407f-b0da-471787a7c407-node-certs\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.996602 kubelet[2757]: I1123 22:56:01.996435 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32cbdab4-ab12-407f-b0da-471787a7c407-tigera-ca-bundle\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.996602 kubelet[2757]: I1123 22:56:01.996483 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjzj\" (UniqueName: \"kubernetes.io/projected/32cbdab4-ab12-407f-b0da-471787a7c407-kube-api-access-7fjzj\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.996602 kubelet[2757]: I1123 22:56:01.996557 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-cni-net-dir\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.997342 kubelet[2757]: I1123 22:56:01.996964 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-policysync\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.997342 kubelet[2757]: I1123 22:56:01.997069 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-var-lib-calico\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.997492 kubelet[2757]: I1123 22:56:01.997473 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-var-run-calico\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.997595 kubelet[2757]: I1123 22:56:01.997581 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-flexvol-driver-host\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.997689 kubelet[2757]: I1123 22:56:01.997676 2757 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-lib-modules\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.997826 kubelet[2757]: I1123 22:56:01.997774 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-cni-log-dir\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:01.998663 kubelet[2757]: I1123 22:56:01.997799 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32cbdab4-ab12-407f-b0da-471787a7c407-xtables-lock\") pod \"calico-node-8brhz\" (UID: \"32cbdab4-ab12-407f-b0da-471787a7c407\") " pod="calico-system/calico-node-8brhz" Nov 23 22:56:02.057023 kubelet[2757]: E1123 22:56:02.056512 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:02.100655 kubelet[2757]: I1123 22:56:02.099076 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21480ae4-8b64-4bd3-93f8-a08b2cf68bf0-socket-dir\") pod \"csi-node-driver-h75fx\" (UID: \"21480ae4-8b64-4bd3-93f8-a08b2cf68bf0\") " pod="calico-system/csi-node-driver-h75fx" Nov 23 22:56:02.100655 kubelet[2757]: I1123 22:56:02.099220 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pl8m\" (UniqueName: \"kubernetes.io/projected/21480ae4-8b64-4bd3-93f8-a08b2cf68bf0-kube-api-access-2pl8m\") pod \"csi-node-driver-h75fx\" (UID: \"21480ae4-8b64-4bd3-93f8-a08b2cf68bf0\") " pod="calico-system/csi-node-driver-h75fx" Nov 23 22:56:02.100655 kubelet[2757]: I1123 22:56:02.099280 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21480ae4-8b64-4bd3-93f8-a08b2cf68bf0-registration-dir\") pod \"csi-node-driver-h75fx\" (UID: \"21480ae4-8b64-4bd3-93f8-a08b2cf68bf0\") " pod="calico-system/csi-node-driver-h75fx" Nov 23 22:56:02.100655 kubelet[2757]: I1123 22:56:02.099301 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/21480ae4-8b64-4bd3-93f8-a08b2cf68bf0-varrun\") pod \"csi-node-driver-h75fx\" (UID: \"21480ae4-8b64-4bd3-93f8-a08b2cf68bf0\") " pod="calico-system/csi-node-driver-h75fx" Nov 23 22:56:02.100655 kubelet[2757]: I1123 22:56:02.099329 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21480ae4-8b64-4bd3-93f8-a08b2cf68bf0-kubelet-dir\") pod \"csi-node-driver-h75fx\" (UID: \"21480ae4-8b64-4bd3-93f8-a08b2cf68bf0\") " pod="calico-system/csi-node-driver-h75fx" Nov 23 22:56:02.102555 kubelet[2757]: E1123 22:56:02.102418 2757 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.103001 kubelet[2757]: W1123 22:56:02.102950 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.103110 kubelet[2757]: E1123 22:56:02.103097 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.103895 kubelet[2757]: E1123 22:56:02.103795 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.103895 kubelet[2757]: W1123 22:56:02.103823 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.103895 kubelet[2757]: E1123 22:56:02.103839 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.105038 kubelet[2757]: E1123 22:56:02.104928 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.105038 kubelet[2757]: W1123 22:56:02.104952 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.105038 kubelet[2757]: E1123 22:56:02.104972 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.105340 kubelet[2757]: E1123 22:56:02.105303 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.105340 kubelet[2757]: W1123 22:56:02.105316 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.105493 kubelet[2757]: E1123 22:56:02.105326 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.109392 kubelet[2757]: E1123 22:56:02.109370 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.109642 kubelet[2757]: W1123 22:56:02.109531 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.109642 kubelet[2757]: E1123 22:56:02.109555 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.114807 kubelet[2757]: E1123 22:56:02.113289 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.114807 kubelet[2757]: W1123 22:56:02.113316 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.114807 kubelet[2757]: E1123 22:56:02.113565 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.115208 kubelet[2757]: E1123 22:56:02.115052 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.115208 kubelet[2757]: W1123 22:56:02.115070 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.115208 kubelet[2757]: E1123 22:56:02.115102 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.116702 kubelet[2757]: E1123 22:56:02.116660 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.116979 kubelet[2757]: W1123 22:56:02.116960 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.117061 kubelet[2757]: E1123 22:56:02.117049 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.117543 kubelet[2757]: E1123 22:56:02.117530 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.117646 kubelet[2757]: W1123 22:56:02.117633 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.117742 kubelet[2757]: E1123 22:56:02.117719 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.118076 kubelet[2757]: E1123 22:56:02.118059 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.118773 kubelet[2757]: W1123 22:56:02.118103 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.118773 kubelet[2757]: E1123 22:56:02.118116 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.119404 kubelet[2757]: E1123 22:56:02.119284 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.119404 kubelet[2757]: W1123 22:56:02.119299 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.119404 kubelet[2757]: E1123 22:56:02.119318 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.119691 kubelet[2757]: E1123 22:56:02.119658 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.119691 kubelet[2757]: W1123 22:56:02.119670 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.119691 kubelet[2757]: E1123 22:56:02.119680 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.120201 kubelet[2757]: E1123 22:56:02.120177 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.120431 kubelet[2757]: W1123 22:56:02.120190 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.120431 kubelet[2757]: E1123 22:56:02.120283 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.120813 kubelet[2757]: E1123 22:56:02.120776 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.121056 kubelet[2757]: W1123 22:56:02.120790 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.121056 kubelet[2757]: E1123 22:56:02.121000 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.124886 containerd[1533]: time="2025-11-23T22:56:02.123615747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff97b4ccb-pjn9j,Uid:f9cfd02a-c53c-423f-83d8-89fd45642cc6,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:02.157666 kubelet[2757]: E1123 22:56:02.157444 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.157666 kubelet[2757]: W1123 22:56:02.157477 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.157666 kubelet[2757]: E1123 22:56:02.157500 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.164829 containerd[1533]: time="2025-11-23T22:56:02.164689798Z" level=info msg="connecting to shim 79c9bae5ec8885ae6d82a7a12fac6bf50a5ef6d12077885902380231e350a43f" address="unix:///run/containerd/s/62e9a1774a3ac919a83c02f0f6824781b3546c83cdec2f9dd5b564eed085ef5e" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:02.196019 systemd[1]: Started cri-containerd-79c9bae5ec8885ae6d82a7a12fac6bf50a5ef6d12077885902380231e350a43f.scope - libcontainer container 79c9bae5ec8885ae6d82a7a12fac6bf50a5ef6d12077885902380231e350a43f. Nov 23 22:56:02.201181 kubelet[2757]: E1123 22:56:02.201124 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.201525 kubelet[2757]: W1123 22:56:02.201296 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.201525 kubelet[2757]: E1123 22:56:02.201376 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.202024 kubelet[2757]: E1123 22:56:02.201974 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.202527 kubelet[2757]: W1123 22:56:02.202004 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.202662 kubelet[2757]: E1123 22:56:02.202512 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.203243 kubelet[2757]: E1123 22:56:02.203195 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.203331 kubelet[2757]: W1123 22:56:02.203250 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.203331 kubelet[2757]: E1123 22:56:02.203274 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.203731 kubelet[2757]: E1123 22:56:02.203712 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.204596 kubelet[2757]: W1123 22:56:02.204555 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.204596 kubelet[2757]: E1123 22:56:02.204590 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.204999 kubelet[2757]: E1123 22:56:02.204979 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.204999 kubelet[2757]: W1123 22:56:02.204995 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.205082 kubelet[2757]: E1123 22:56:02.205044 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.205705 kubelet[2757]: E1123 22:56:02.205658 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.205705 kubelet[2757]: W1123 22:56:02.205679 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.205705 kubelet[2757]: E1123 22:56:02.205692 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.206436 kubelet[2757]: E1123 22:56:02.206413 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.206436 kubelet[2757]: W1123 22:56:02.206432 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.206436 kubelet[2757]: E1123 22:56:02.206446 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.206709 kubelet[2757]: E1123 22:56:02.206692 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.206709 kubelet[2757]: W1123 22:56:02.206706 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.206868 kubelet[2757]: E1123 22:56:02.206717 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.207015 kubelet[2757]: E1123 22:56:02.206997 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.207055 kubelet[2757]: W1123 22:56:02.207014 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.207055 kubelet[2757]: E1123 22:56:02.207025 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.208228 kubelet[2757]: E1123 22:56:02.208202 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.208228 kubelet[2757]: W1123 22:56:02.208223 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.208361 kubelet[2757]: E1123 22:56:02.208237 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.208439 kubelet[2757]: E1123 22:56:02.208421 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.208439 kubelet[2757]: W1123 22:56:02.208435 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.208528 kubelet[2757]: E1123 22:56:02.208444 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.208715 kubelet[2757]: E1123 22:56:02.208699 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.208715 kubelet[2757]: W1123 22:56:02.208714 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.208810 kubelet[2757]: E1123 22:56:02.208725 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.208915 kubelet[2757]: E1123 22:56:02.208894 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.208915 kubelet[2757]: W1123 22:56:02.208907 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.208915 kubelet[2757]: E1123 22:56:02.208917 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.209065 kubelet[2757]: E1123 22:56:02.209053 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.209065 kubelet[2757]: W1123 22:56:02.209063 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.209216 kubelet[2757]: E1123 22:56:02.209071 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.209260 kubelet[2757]: E1123 22:56:02.209234 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.209260 kubelet[2757]: W1123 22:56:02.209242 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.209329 kubelet[2757]: E1123 22:56:02.209287 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.209605 kubelet[2757]: E1123 22:56:02.209583 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.209764 kubelet[2757]: W1123 22:56:02.209606 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.209764 kubelet[2757]: E1123 22:56:02.209619 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.209887 kubelet[2757]: E1123 22:56:02.209860 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.209887 kubelet[2757]: W1123 22:56:02.209879 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.209961 kubelet[2757]: E1123 22:56:02.209891 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.210141 kubelet[2757]: E1123 22:56:02.210123 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.210141 kubelet[2757]: W1123 22:56:02.210138 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.210263 kubelet[2757]: E1123 22:56:02.210186 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.210425 kubelet[2757]: E1123 22:56:02.210411 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.210466 kubelet[2757]: W1123 22:56:02.210425 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.210466 kubelet[2757]: E1123 22:56:02.210451 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.210777 kubelet[2757]: E1123 22:56:02.210759 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.210777 kubelet[2757]: W1123 22:56:02.210775 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.210885 kubelet[2757]: E1123 22:56:02.210785 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.212233 kubelet[2757]: E1123 22:56:02.212211 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.212233 kubelet[2757]: W1123 22:56:02.212230 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.212233 kubelet[2757]: E1123 22:56:02.212244 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.212554 kubelet[2757]: E1123 22:56:02.212537 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.212554 kubelet[2757]: W1123 22:56:02.212551 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.212705 kubelet[2757]: E1123 22:56:02.212561 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.212789 kubelet[2757]: E1123 22:56:02.212773 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.212789 kubelet[2757]: W1123 22:56:02.212788 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.212956 kubelet[2757]: E1123 22:56:02.212798 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:02.213887 kubelet[2757]: E1123 22:56:02.213868 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.213998 kubelet[2757]: W1123 22:56:02.213984 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.214059 kubelet[2757]: E1123 22:56:02.214048 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.214638 kubelet[2757]: E1123 22:56:02.214492 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.214638 kubelet[2757]: W1123 22:56:02.214508 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.214638 kubelet[2757]: E1123 22:56:02.214519 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.233410 kubelet[2757]: E1123 22:56:02.233276 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:02.234489 kubelet[2757]: W1123 22:56:02.234344 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:02.234489 kubelet[2757]: E1123 22:56:02.234379 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:02.244653 containerd[1533]: time="2025-11-23T22:56:02.244361449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8brhz,Uid:32cbdab4-ab12-407f-b0da-471787a7c407,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:02.256465 containerd[1533]: time="2025-11-23T22:56:02.256419647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff97b4ccb-pjn9j,Uid:f9cfd02a-c53c-423f-83d8-89fd45642cc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"79c9bae5ec8885ae6d82a7a12fac6bf50a5ef6d12077885902380231e350a43f\"" Nov 23 22:56:02.258916 containerd[1533]: time="2025-11-23T22:56:02.258882536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 22:56:02.272210 containerd[1533]: time="2025-11-23T22:56:02.272005675Z" level=info msg="connecting to shim c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3" address="unix:///run/containerd/s/1d5bb6b4f4f2c23bd45abbeeab14c4bd3a0ab2eda132e54085c3c2ed30959392" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:02.299948 systemd[1]: Started cri-containerd-c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3.scope - libcontainer container c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3. 
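[Editor's note] The repeated kubelet messages above come from FlexVolume dynamic probing: the kubelet walks the directories under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary with the operation name (`init`) as the first argument, and expects a single JSON status object on stdout. Because the `nodeagent~uds/uds` executable does not exist yet, the call produces empty output and the JSON unmarshal fails with "unexpected end of JSON input". The sketch below (illustrative only, assuming the standard FlexVolume call convention; it is not part of anything shown in this log) shows the shape of the reply a driver's `init` handler would emit.

```go
// Minimal sketch of a FlexVolume driver's "init" handler.
// Assumes the standard FlexVolume convention: the kubelet execs the driver
// binary with the operation as the first argument and parses one JSON object
// from stdout. Names here are illustrative, not from this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON the kubelet tries to unmarshal; the empty
// output in the log above is what triggers "unexpected end of JSON input".
type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func emit(s driverStatus) {
	b, _ := json.Marshal(s)
	fmt.Println(string(b))
}

func main() {
	if len(os.Args) < 2 {
		emit(driverStatus{Status: "Failure", Message: "no operation given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that attach/detach is not implemented.
		emit(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		// Unimplemented operations must still answer with valid JSON.
		emit(driverStatus{Status: "Not supported"})
	}
}
```

The probing noise is harmless for pod startup here; it simply repeats until a real driver binary appears in that directory.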
Nov 23 22:56:02.336949 containerd[1533]: time="2025-11-23T22:56:02.336819313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8brhz,Uid:32cbdab4-ab12-407f-b0da-471787a7c407,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3\"" Nov 23 22:56:03.322789 kubelet[2757]: E1123 22:56:03.322371 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:03.684039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709859015.mount: Deactivated successfully. Nov 23 22:56:04.346372 containerd[1533]: time="2025-11-23T22:56:04.345684463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:04.346814 containerd[1533]: time="2025-11-23T22:56:04.346784804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 23 22:56:04.347701 containerd[1533]: time="2025-11-23T22:56:04.347671221Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:04.351093 containerd[1533]: time="2025-11-23T22:56:04.351041245Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:04.351707 containerd[1533]: time="2025-11-23T22:56:04.351669297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.09270152s" Nov 23 22:56:04.351779 containerd[1533]: time="2025-11-23T22:56:04.351709018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 22:56:04.353991 containerd[1533]: time="2025-11-23T22:56:04.353947741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 22:56:04.372500 containerd[1533]: time="2025-11-23T22:56:04.372456336Z" level=info msg="CreateContainer within sandbox \"79c9bae5ec8885ae6d82a7a12fac6bf50a5ef6d12077885902380231e350a43f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 22:56:04.386741 containerd[1533]: time="2025-11-23T22:56:04.384979656Z" level=info msg="Container a455fe181af9ca282dfe3ca9706056158839e1196dc4b6a60eeb228d1f40c8bd: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:04.392483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4235604747.mount: Deactivated successfully. 
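[Editor's note] The containerd entries above report each pull result in the form `Pulled image "…" … in 2.09270152s`. The small sketch below is an editor-added illustration (not part of any tooling mentioned in this log) of extracting the image reference and pull duration from such lines, assuming exactly this message format.

```go
// Illustrative sketch: pull the image name and duration out of a containerd
// "Pulled image ... in <duration>" log message like the one above.
// The message format is assumed from this log, not from a containerd API.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var pulledRe = regexp.MustCompile(`Pulled image \\"([^"]+)\\".* in ([0-9.]+m?s)`)

func main() {
	// Example line adapted from the log above (middle fields shortened).
	line := `time="2025-11-23T22:56:04.351669297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe3...\", size \"33090541\" in 2.09270152s"`

	m := pulledRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no pull result in line")
		return
	}
	d, err := time.ParseDuration(m[2])
	if err != nil {
		fmt.Println("bad duration:", err)
		return
	}
	fmt.Printf("image %s pulled in %v\n", m[1], d) // image ghcr.io/flatcar/calico/typha:v3.30.4 pulled in 2.09270152s
}
```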
Nov 23 22:56:04.402089 containerd[1533]: time="2025-11-23T22:56:04.402009342Z" level=info msg="CreateContainer within sandbox \"79c9bae5ec8885ae6d82a7a12fac6bf50a5ef6d12077885902380231e350a43f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a455fe181af9ca282dfe3ca9706056158839e1196dc4b6a60eeb228d1f40c8bd\"" Nov 23 22:56:04.403751 containerd[1533]: time="2025-11-23T22:56:04.402965560Z" level=info msg="StartContainer for \"a455fe181af9ca282dfe3ca9706056158839e1196dc4b6a60eeb228d1f40c8bd\"" Nov 23 22:56:04.405315 containerd[1533]: time="2025-11-23T22:56:04.405135762Z" level=info msg="connecting to shim a455fe181af9ca282dfe3ca9706056158839e1196dc4b6a60eeb228d1f40c8bd" address="unix:///run/containerd/s/62e9a1774a3ac919a83c02f0f6824781b3546c83cdec2f9dd5b564eed085ef5e" protocol=ttrpc version=3 Nov 23 22:56:04.432835 systemd[1]: Started cri-containerd-a455fe181af9ca282dfe3ca9706056158839e1196dc4b6a60eeb228d1f40c8bd.scope - libcontainer container a455fe181af9ca282dfe3ca9706056158839e1196dc4b6a60eeb228d1f40c8bd. Nov 23 22:56:04.479059 containerd[1533]: time="2025-11-23T22:56:04.479020218Z" level=info msg="StartContainer for \"a455fe181af9ca282dfe3ca9706056158839e1196dc4b6a60eeb228d1f40c8bd\" returns successfully" Nov 23 22:56:05.327305 kubelet[2757]: E1123 22:56:05.326393 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:05.474128 kubelet[2757]: I1123 22:56:05.473245 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-ff97b4ccb-pjn9j" podStartSLOduration=2.378724432 podStartE2EDuration="4.473225827s" podCreationTimestamp="2025-11-23 22:56:01 +0000 UTC" firstStartedPulling="2025-11-23 22:56:02.258545849 +0000 UTC m=+27.087500277" lastFinishedPulling="2025-11-23 22:56:04.353047244 +0000 UTC m=+29.182001672" observedRunningTime="2025-11-23 22:56:05.47178052 +0000 UTC m=+30.300734988" watchObservedRunningTime="2025-11-23 22:56:05.473225827 +0000 UTC m=+30.302180255" Nov 23 22:56:05.493971 kubelet[2757]: E1123 22:56:05.493924 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.493971 kubelet[2757]: W1123 22:56:05.493967 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.494174 kubelet[2757]: E1123 22:56:05.494014 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:05.494455 kubelet[2757]: E1123 22:56:05.494433 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.494535 kubelet[2757]: W1123 22:56:05.494459 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.494574 kubelet[2757]: E1123 22:56:05.494543 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.494857 kubelet[2757]: E1123 22:56:05.494837 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.494914 kubelet[2757]: W1123 22:56:05.494860 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.494914 kubelet[2757]: E1123 22:56:05.494877 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.496944 kubelet[2757]: E1123 22:56:05.496914 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.496944 kubelet[2757]: W1123 22:56:05.496939 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.497121 kubelet[2757]: E1123 22:56:05.496957 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.497377 kubelet[2757]: E1123 22:56:05.497249 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.497377 kubelet[2757]: W1123 22:56:05.497261 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.497377 kubelet[2757]: E1123 22:56:05.497271 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.497716 kubelet[2757]: E1123 22:56:05.497698 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.497772 kubelet[2757]: W1123 22:56:05.497718 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.497772 kubelet[2757]: E1123 22:56:05.497736 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:05.497948 kubelet[2757]: E1123 22:56:05.497935 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.497992 kubelet[2757]: W1123 22:56:05.497950 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.497992 kubelet[2757]: E1123 22:56:05.497963 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.498178 kubelet[2757]: E1123 22:56:05.498151 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.498208 kubelet[2757]: W1123 22:56:05.498192 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.498235 kubelet[2757]: E1123 22:56:05.498210 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.498423 kubelet[2757]: E1123 22:56:05.498410 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.498467 kubelet[2757]: W1123 22:56:05.498425 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.498467 kubelet[2757]: E1123 22:56:05.498438 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.498615 kubelet[2757]: E1123 22:56:05.498602 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.498671 kubelet[2757]: W1123 22:56:05.498617 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.498706 kubelet[2757]: E1123 22:56:05.498676 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.498850 kubelet[2757]: E1123 22:56:05.498837 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.498888 kubelet[2757]: W1123 22:56:05.498852 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.498888 kubelet[2757]: E1123 22:56:05.498863 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:05.499040 kubelet[2757]: E1123 22:56:05.499029 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.499071 kubelet[2757]: W1123 22:56:05.499042 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.499071 kubelet[2757]: E1123 22:56:05.499053 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.499349 kubelet[2757]: E1123 22:56:05.499328 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.499349 kubelet[2757]: W1123 22:56:05.499347 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.499443 kubelet[2757]: E1123 22:56:05.499363 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.499597 kubelet[2757]: E1123 22:56:05.499584 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.499649 kubelet[2757]: W1123 22:56:05.499600 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.499649 kubelet[2757]: E1123 22:56:05.499612 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.499850 kubelet[2757]: E1123 22:56:05.499836 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.499894 kubelet[2757]: W1123 22:56:05.499852 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.499894 kubelet[2757]: E1123 22:56:05.499865 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.537757 kubelet[2757]: E1123 22:56:05.537696 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.538116 kubelet[2757]: W1123 22:56:05.537831 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.538116 kubelet[2757]: E1123 22:56:05.537865 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:05.538720 kubelet[2757]: E1123 22:56:05.538477 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.538720 kubelet[2757]: W1123 22:56:05.538502 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.538720 kubelet[2757]: E1123 22:56:05.538525 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.539156 kubelet[2757]: E1123 22:56:05.539135 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.539505 kubelet[2757]: W1123 22:56:05.539279 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.539505 kubelet[2757]: E1123 22:56:05.539308 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.540279 kubelet[2757]: E1123 22:56:05.540028 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.540279 kubelet[2757]: W1123 22:56:05.540049 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.540279 kubelet[2757]: E1123 22:56:05.540071 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.540831 kubelet[2757]: E1123 22:56:05.540807 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.541093 kubelet[2757]: W1123 22:56:05.540910 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.541093 kubelet[2757]: E1123 22:56:05.540932 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.541272 kubelet[2757]: E1123 22:56:05.541255 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.541472 kubelet[2757]: W1123 22:56:05.541333 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.541472 kubelet[2757]: E1123 22:56:05.541351 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:05.541611 kubelet[2757]: E1123 22:56:05.541597 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.541703 kubelet[2757]: W1123 22:56:05.541689 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.541868 kubelet[2757]: E1123 22:56:05.541757 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.541986 kubelet[2757]: E1123 22:56:05.541972 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.542234 kubelet[2757]: W1123 22:56:05.542047 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.542234 kubelet[2757]: E1123 22:56:05.542064 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.542389 kubelet[2757]: E1123 22:56:05.542374 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.542453 kubelet[2757]: W1123 22:56:05.542440 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.542511 kubelet[2757]: E1123 22:56:05.542499 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.542885 kubelet[2757]: E1123 22:56:05.542844 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.542885 kubelet[2757]: W1123 22:56:05.542873 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.542981 kubelet[2757]: E1123 22:56:05.542890 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.543064 kubelet[2757]: E1123 22:56:05.543039 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.543064 kubelet[2757]: W1123 22:56:05.543056 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.543133 kubelet[2757]: E1123 22:56:05.543069 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:05.543294 kubelet[2757]: E1123 22:56:05.543276 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.543294 kubelet[2757]: W1123 22:56:05.543290 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.543373 kubelet[2757]: E1123 22:56:05.543302 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.543602 kubelet[2757]: E1123 22:56:05.543547 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.543602 kubelet[2757]: W1123 22:56:05.543565 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.543602 kubelet[2757]: E1123 22:56:05.543577 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.543955 kubelet[2757]: E1123 22:56:05.543937 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.543955 kubelet[2757]: W1123 22:56:05.543953 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.544043 kubelet[2757]: E1123 22:56:05.543965 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.544159 kubelet[2757]: E1123 22:56:05.544138 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.544159 kubelet[2757]: W1123 22:56:05.544154 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.544242 kubelet[2757]: E1123 22:56:05.544181 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.544445 kubelet[2757]: E1123 22:56:05.544368 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.544476 kubelet[2757]: W1123 22:56:05.544458 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.544476 kubelet[2757]: E1123 22:56:05.544472 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:05.544786 kubelet[2757]: E1123 22:56:05.544772 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.544870 kubelet[2757]: W1123 22:56:05.544859 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.544924 kubelet[2757]: E1123 22:56:05.544914 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.545267 kubelet[2757]: E1123 22:56:05.545248 2757 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:05.545351 kubelet[2757]: W1123 22:56:05.545339 2757 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:05.545415 kubelet[2757]: E1123 22:56:05.545403 2757 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:05.780739 containerd[1533]: time="2025-11-23T22:56:05.780664078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:05.782468 containerd[1533]: time="2025-11-23T22:56:05.782176347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 23 22:56:05.784440 containerd[1533]: time="2025-11-23T22:56:05.784271666Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:05.788667 containerd[1533]: time="2025-11-23T22:56:05.788613388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:05.789327 containerd[1533]: time="2025-11-23T22:56:05.789293401Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.43530702s" Nov 23 22:56:05.789427 containerd[1533]: time="2025-11-23T22:56:05.789412643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 22:56:05.796491 containerd[1533]: time="2025-11-23T22:56:05.796453936Z" level=info msg="CreateContainer within sandbox \"c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 22:56:05.812891 containerd[1533]: time="2025-11-23T22:56:05.812840966Z" level=info msg="Container e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353: 
CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:05.825977 containerd[1533]: time="2025-11-23T22:56:05.825924053Z" level=info msg="CreateContainer within sandbox \"c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353\"" Nov 23 22:56:05.829985 containerd[1533]: time="2025-11-23T22:56:05.829939649Z" level=info msg="StartContainer for \"e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353\"" Nov 23 22:56:05.833079 containerd[1533]: time="2025-11-23T22:56:05.832990907Z" level=info msg="connecting to shim e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353" address="unix:///run/containerd/s/1d5bb6b4f4f2c23bd45abbeeab14c4bd3a0ab2eda132e54085c3c2ed30959392" protocol=ttrpc version=3 Nov 23 22:56:05.863953 systemd[1]: Started cri-containerd-e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353.scope - libcontainer container e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353. Nov 23 22:56:05.942249 containerd[1533]: time="2025-11-23T22:56:05.942201851Z" level=info msg="StartContainer for \"e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353\" returns successfully" Nov 23 22:56:05.962088 systemd[1]: cri-containerd-e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353.scope: Deactivated successfully. Nov 23 22:56:05.968843 containerd[1533]: time="2025-11-23T22:56:05.968804034Z" level=info msg="received container exit event container_id:\"e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353\" id:\"e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353\" pid:3405 exited_at:{seconds:1763938565 nanos:968288544}" Nov 23 22:56:05.995151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e06ee2e38d3b289d659dc0ba9ff11a607a2a26d75450de04e948f1bbc0712353-rootfs.mount: Deactivated successfully. 
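[Editor's note] The `flexvol-driver` container that just ran and exited (e06ee2e3…) comes from Calico's pod2daemon-flexvol image; its job is to install the `nodeagent~uds/uds` FlexVolume driver binary that the kubelet was failing to find in the "executable file not found in $PATH" messages earlier, so that probing noise typically subsides after this init step completes. Also worth noting from the pod_startup_latency_tracker entry above: podStartE2EDuration (4.473225827s, creation at 22:56:01 to observed running at 22:56:05.473) minus the image-pull window (22:56:02.258545849 to 22:56:04.353047244, about 2.094501395s) gives exactly the reported podStartSLOduration of 2.378724432s, i.e. the SLO figure excludes pull time. The sketch below is a quick node-side check, using the driver path verbatim from the kubelet messages, that the binary is now present and executable.

```go
// Quick node-side check: does the FlexVolume driver the kubelet was probing
// for exist now that the flexvol-driver init container has run?
// The path is taken verbatim from the kubelet messages above.
package main

import (
	"fmt"
	"os"
)

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	// The kubelet execs this file directly, so an executable bit must be set.
	if info.Mode()&0o111 == 0 {
		fmt.Println("driver present but not executable:", info.Mode())
		return
	}
	fmt.Println("driver installed:", driver, info.Mode())
}
```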
Nov 23 22:56:06.459644 kubelet[2757]: I1123 22:56:06.459574 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 22:56:06.462921 containerd[1533]: time="2025-11-23T22:56:06.462881652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 22:56:07.326649 kubelet[2757]: E1123 22:56:07.324131 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:08.999698 containerd[1533]: time="2025-11-23T22:56:08.998652900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:09.000283 containerd[1533]: time="2025-11-23T22:56:08.999849282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 22:56:09.000996 containerd[1533]: time="2025-11-23T22:56:09.000950422Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:09.004108 containerd[1533]: time="2025-11-23T22:56:09.004050518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:09.005774 containerd[1533]: time="2025-11-23T22:56:09.005735988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.542788735s" Nov 23 22:56:09.005904 containerd[1533]: time="2025-11-23T22:56:09.005888670Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 22:56:09.011726 containerd[1533]: time="2025-11-23T22:56:09.011516091Z" level=info msg="CreateContainer within sandbox \"c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 22:56:09.025748 containerd[1533]: time="2025-11-23T22:56:09.023696909Z" level=info msg="Container 81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:09.044973 containerd[1533]: time="2025-11-23T22:56:09.044894769Z" level=info msg="CreateContainer within sandbox \"c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38\"" Nov 23 22:56:09.046085 containerd[1533]: time="2025-11-23T22:56:09.046062550Z" level=info msg="StartContainer for \"81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38\"" Nov 23 22:56:09.047983 containerd[1533]: time="2025-11-23T22:56:09.047906303Z" level=info msg="connecting to shim 81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38" 
address="unix:///run/containerd/s/1d5bb6b4f4f2c23bd45abbeeab14c4bd3a0ab2eda132e54085c3c2ed30959392" protocol=ttrpc version=3 Nov 23 22:56:09.074863 systemd[1]: Started cri-containerd-81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38.scope - libcontainer container 81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38. Nov 23 22:56:09.157023 containerd[1533]: time="2025-11-23T22:56:09.156983377Z" level=info msg="StartContainer for \"81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38\" returns successfully" Nov 23 22:56:09.325375 kubelet[2757]: E1123 22:56:09.325209 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:09.713566 systemd[1]: cri-containerd-81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38.scope: Deactivated successfully. Nov 23 22:56:09.714282 systemd[1]: cri-containerd-81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38.scope: Consumed 538ms CPU time, 185.5M memory peak, 165.9M written to disk. Nov 23 22:56:09.719678 containerd[1533]: time="2025-11-23T22:56:09.719489054Z" level=info msg="received container exit event container_id:\"81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38\" id:\"81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38\" pid:3465 exited_at:{seconds:1763938569 nanos:719233689}" Nov 23 22:56:09.721111 kubelet[2757]: I1123 22:56:09.721079 2757 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 22:56:09.766040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81ef48f537079d27e4a539167478a89ad8f36de55e09c41c818cef2cd9acdd38-rootfs.mount: Deactivated successfully. Nov 23 22:56:09.835518 systemd[1]: Created slice kubepods-burstable-podb21617f8_e4ed_43f4_8cea_63bb326da3a4.slice - libcontainer container kubepods-burstable-podb21617f8_e4ed_43f4_8cea_63bb326da3a4.slice. Nov 23 22:56:09.865484 systemd[1]: Created slice kubepods-besteffort-podf2f2beaa_e94b_428d_976f_479df6d0fa8f.slice - libcontainer container kubepods-besteffort-podf2f2beaa_e94b_428d_976f_479df6d0fa8f.slice. 
Nov 23 22:56:09.870786 kubelet[2757]: I1123 22:56:09.870031 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7z72\" (UniqueName: \"kubernetes.io/projected/b21617f8-e4ed-43f4-8cea-63bb326da3a4-kube-api-access-p7z72\") pod \"coredns-674b8bbfcf-dvrbp\" (UID: \"b21617f8-e4ed-43f4-8cea-63bb326da3a4\") " pod="kube-system/coredns-674b8bbfcf-dvrbp" Nov 23 22:56:09.870786 kubelet[2757]: I1123 22:56:09.870111 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b21617f8-e4ed-43f4-8cea-63bb326da3a4-config-volume\") pod \"coredns-674b8bbfcf-dvrbp\" (UID: \"b21617f8-e4ed-43f4-8cea-63bb326da3a4\") " pod="kube-system/coredns-674b8bbfcf-dvrbp" Nov 23 22:56:09.870786 kubelet[2757]: I1123 22:56:09.870144 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8685cb9-790c-467e-8fb6-e40f7f6bef3f-config-volume\") pod \"coredns-674b8bbfcf-mnrvc\" (UID: \"e8685cb9-790c-467e-8fb6-e40f7f6bef3f\") " pod="kube-system/coredns-674b8bbfcf-mnrvc" Nov 23 22:56:09.870786 kubelet[2757]: I1123 22:56:09.870215 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj7n2\" (UniqueName: \"kubernetes.io/projected/e8685cb9-790c-467e-8fb6-e40f7f6bef3f-kube-api-access-hj7n2\") pod \"coredns-674b8bbfcf-mnrvc\" (UID: \"e8685cb9-790c-467e-8fb6-e40f7f6bef3f\") " pod="kube-system/coredns-674b8bbfcf-mnrvc" Nov 23 22:56:09.889091 systemd[1]: Created slice kubepods-besteffort-podab494c3a_4812_4ee2_ad6b_4c8c2c77a5ee.slice - libcontainer container kubepods-besteffort-podab494c3a_4812_4ee2_ad6b_4c8c2c77a5ee.slice. Nov 23 22:56:09.897315 systemd[1]: Created slice kubepods-besteffort-pod22998d4f_4bc5_4628_a75e_c9b585fec59a.slice - libcontainer container kubepods-besteffort-pod22998d4f_4bc5_4628_a75e_c9b585fec59a.slice. Nov 23 22:56:09.908846 systemd[1]: Created slice kubepods-burstable-pode8685cb9_790c_467e_8fb6_e40f7f6bef3f.slice - libcontainer container kubepods-burstable-pode8685cb9_790c_467e_8fb6_e40f7f6bef3f.slice. Nov 23 22:56:09.923330 systemd[1]: Created slice kubepods-besteffort-podae58e09c_3642_4da5_a2ea_675ec846270c.slice - libcontainer container kubepods-besteffort-podae58e09c_3642_4da5_a2ea_675ec846270c.slice. Nov 23 22:56:09.933340 systemd[1]: Created slice kubepods-besteffort-pod4d9ba8ba_5ef4_48c0_a1ef_9e82a35d575b.slice - libcontainer container kubepods-besteffort-pod4d9ba8ba_5ef4_48c0_a1ef_9e82a35d575b.slice. 
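[Editor's note] The "Created slice kubepods-…" entries show how the kubelet's systemd cgroup driver names pod slices: a QoS-class segment (burstable or besteffort) plus "pod" and the pod UID with dashes replaced by underscores, as in kubepods-burstable-podb21617f8_e4ed_43f4_8cea_63bb326da3a4.slice above. The sketch below simply reproduces that naming from the values visible in this log; it does not call any kubelet API.

```go
// Sketch: reconstruct the systemd slice names seen in the log from a pod's
// QoS class and UID (dashes in the UID become underscores). This mirrors the
// names journald printed above; it does not call any kubelet API.
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	// Guaranteed pods sit directly under kubepods; burstable and besteffort
	// pods get a QoS segment, matching the "kubepods-burstable-pod..." and
	// "kubepods-besteffort-pod..." slices above.
	prefix := "kubepods"
	if qosClass == "burstable" || qosClass == "besteffort" {
		prefix += "-" + qosClass
	}
	return prefix + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSlice("burstable", "b21617f8-e4ed-43f4-8cea-63bb326da3a4"))
	// -> kubepods-burstable-podb21617f8_e4ed_43f4_8cea_63bb326da3a4.slice
	fmt.Println(podSlice("besteffort", "4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b"))
	// -> kubepods-besteffort-pod4d9ba8ba_5ef4_48c0_a1ef_9e82a35d575b.slice
}
```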
Nov 23 22:56:09.972272 kubelet[2757]: I1123 22:56:09.971620 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22998d4f-4bc5-4628-a75e-c9b585fec59a-tigera-ca-bundle\") pod \"calico-kube-controllers-755bdf67f9-xqvvt\" (UID: \"22998d4f-4bc5-4628-a75e-c9b585fec59a\") " pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" Nov 23 22:56:09.972272 kubelet[2757]: I1123 22:56:09.971722 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jzx5\" (UniqueName: \"kubernetes.io/projected/f2f2beaa-e94b-428d-976f-479df6d0fa8f-kube-api-access-7jzx5\") pod \"calico-apiserver-9564bdf65-g8k2q\" (UID: \"f2f2beaa-e94b-428d-976f-479df6d0fa8f\") " pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" Nov 23 22:56:09.972272 kubelet[2757]: I1123 22:56:09.971746 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-backend-key-pair\") pod \"whisker-8686b5dd99-pdrwp\" (UID: \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\") " pod="calico-system/whisker-8686b5dd99-pdrwp" Nov 23 22:56:09.972272 kubelet[2757]: I1123 22:56:09.971786 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4px\" (UniqueName: \"kubernetes.io/projected/22998d4f-4bc5-4628-a75e-c9b585fec59a-kube-api-access-nn4px\") pod \"calico-kube-controllers-755bdf67f9-xqvvt\" (UID: \"22998d4f-4bc5-4628-a75e-c9b585fec59a\") " pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" Nov 23 22:56:09.972272 kubelet[2757]: I1123 22:56:09.971808 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tql6p\" (UniqueName: \"kubernetes.io/projected/ae58e09c-3642-4da5-a2ea-675ec846270c-kube-api-access-tql6p\") pod \"calico-apiserver-9564bdf65-pdtkd\" (UID: \"ae58e09c-3642-4da5-a2ea-675ec846270c\") " pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" Nov 23 22:56:09.972854 kubelet[2757]: I1123 22:56:09.971881 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee-goldmane-ca-bundle\") pod \"goldmane-666569f655-k4l4d\" (UID: \"ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee\") " pod="calico-system/goldmane-666569f655-k4l4d" Nov 23 22:56:09.972854 kubelet[2757]: I1123 22:56:09.971903 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee-goldmane-key-pair\") pod \"goldmane-666569f655-k4l4d\" (UID: \"ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee\") " pod="calico-system/goldmane-666569f655-k4l4d" Nov 23 22:56:09.974566 kubelet[2757]: I1123 22:56:09.973101 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jgm\" (UniqueName: \"kubernetes.io/projected/ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee-kube-api-access-w8jgm\") pod \"goldmane-666569f655-k4l4d\" (UID: \"ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee\") " pod="calico-system/goldmane-666569f655-k4l4d" Nov 23 22:56:09.974566 kubelet[2757]: I1123 22:56:09.973168 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee-config\") pod \"goldmane-666569f655-k4l4d\" (UID: \"ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee\") " pod="calico-system/goldmane-666569f655-k4l4d" Nov 23 22:56:09.974566 kubelet[2757]: I1123 22:56:09.973258 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2f2beaa-e94b-428d-976f-479df6d0fa8f-calico-apiserver-certs\") pod \"calico-apiserver-9564bdf65-g8k2q\" (UID: \"f2f2beaa-e94b-428d-976f-479df6d0fa8f\") " pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" Nov 23 22:56:09.974566 kubelet[2757]: I1123 22:56:09.973309 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-ca-bundle\") pod \"whisker-8686b5dd99-pdrwp\" (UID: \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\") " pod="calico-system/whisker-8686b5dd99-pdrwp" Nov 23 22:56:09.974566 kubelet[2757]: I1123 22:56:09.973328 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2kh8\" (UniqueName: \"kubernetes.io/projected/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-kube-api-access-h2kh8\") pod \"whisker-8686b5dd99-pdrwp\" (UID: \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\") " pod="calico-system/whisker-8686b5dd99-pdrwp" Nov 23 22:56:09.975014 kubelet[2757]: I1123 22:56:09.973354 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ae58e09c-3642-4da5-a2ea-675ec846270c-calico-apiserver-certs\") pod \"calico-apiserver-9564bdf65-pdtkd\" (UID: \"ae58e09c-3642-4da5-a2ea-675ec846270c\") " pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" Nov 23 22:56:10.144532 containerd[1533]: time="2025-11-23T22:56:10.144455794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvrbp,Uid:b21617f8-e4ed-43f4-8cea-63bb326da3a4,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:10.176553 containerd[1533]: time="2025-11-23T22:56:10.176511081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-g8k2q,Uid:f2f2beaa-e94b-428d-976f-479df6d0fa8f,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:10.197374 containerd[1533]: time="2025-11-23T22:56:10.197283048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k4l4d,Uid:ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:10.204043 containerd[1533]: time="2025-11-23T22:56:10.204004567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755bdf67f9-xqvvt,Uid:22998d4f-4bc5-4628-a75e-c9b585fec59a,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:10.214864 containerd[1533]: time="2025-11-23T22:56:10.214818038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mnrvc,Uid:e8685cb9-790c-467e-8fb6-e40f7f6bef3f,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:10.232096 containerd[1533]: time="2025-11-23T22:56:10.231335970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-pdtkd,Uid:ae58e09c-3642-4da5-a2ea-675ec846270c,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:10.249900 containerd[1533]: time="2025-11-23T22:56:10.249666974Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-8686b5dd99-pdrwp,Uid:4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:10.304708 containerd[1533]: time="2025-11-23T22:56:10.304649187Z" level=error msg="Failed to destroy network for sandbox \"8078ba91c9447c851b8cea9a7df34281420fee914d97110a1020b3c96b5000b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.308868 containerd[1533]: time="2025-11-23T22:56:10.308768700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvrbp,Uid:b21617f8-e4ed-43f4-8cea-63bb326da3a4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8078ba91c9447c851b8cea9a7df34281420fee914d97110a1020b3c96b5000b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.309808 kubelet[2757]: E1123 22:56:10.309036 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8078ba91c9447c851b8cea9a7df34281420fee914d97110a1020b3c96b5000b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.309808 kubelet[2757]: E1123 22:56:10.309116 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8078ba91c9447c851b8cea9a7df34281420fee914d97110a1020b3c96b5000b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvrbp" Nov 23 22:56:10.309808 kubelet[2757]: E1123 22:56:10.309136 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8078ba91c9447c851b8cea9a7df34281420fee914d97110a1020b3c96b5000b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvrbp" Nov 23 22:56:10.309980 kubelet[2757]: E1123 22:56:10.309205 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dvrbp_kube-system(b21617f8-e4ed-43f4-8cea-63bb326da3a4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dvrbp_kube-system(b21617f8-e4ed-43f4-8cea-63bb326da3a4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8078ba91c9447c851b8cea9a7df34281420fee914d97110a1020b3c96b5000b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvrbp" podUID="b21617f8-e4ed-43f4-8cea-63bb326da3a4" Nov 23 22:56:10.370761 containerd[1533]: time="2025-11-23T22:56:10.370706755Z" level=error msg="Failed to destroy network for sandbox \"4914d8bde20b29e59ee3f241e67da1e5969602afca9f972211aa70b49e74534b\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.373820 containerd[1533]: time="2025-11-23T22:56:10.373758089Z" level=error msg="Failed to destroy network for sandbox \"1517095169e8a4791f0031c42425ffdf3f961c63d81c38591641182289175b1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.374145 containerd[1533]: time="2025-11-23T22:56:10.374106695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755bdf67f9-xqvvt,Uid:22998d4f-4bc5-4628-a75e-c9b585fec59a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4914d8bde20b29e59ee3f241e67da1e5969602afca9f972211aa70b49e74534b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.374533 kubelet[2757]: E1123 22:56:10.374475 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4914d8bde20b29e59ee3f241e67da1e5969602afca9f972211aa70b49e74534b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.375896 kubelet[2757]: E1123 22:56:10.374567 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4914d8bde20b29e59ee3f241e67da1e5969602afca9f972211aa70b49e74534b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" Nov 23 22:56:10.375896 kubelet[2757]: E1123 22:56:10.374592 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4914d8bde20b29e59ee3f241e67da1e5969602afca9f972211aa70b49e74534b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" Nov 23 22:56:10.375896 kubelet[2757]: E1123 22:56:10.374714 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-755bdf67f9-xqvvt_calico-system(22998d4f-4bc5-4628-a75e-c9b585fec59a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-755bdf67f9-xqvvt_calico-system(22998d4f-4bc5-4628-a75e-c9b585fec59a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4914d8bde20b29e59ee3f241e67da1e5969602afca9f972211aa70b49e74534b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:56:10.376260 containerd[1533]: time="2025-11-23T22:56:10.376210013Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-g8k2q,Uid:f2f2beaa-e94b-428d-976f-479df6d0fa8f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1517095169e8a4791f0031c42425ffdf3f961c63d81c38591641182289175b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.376723 kubelet[2757]: E1123 22:56:10.376659 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1517095169e8a4791f0031c42425ffdf3f961c63d81c38591641182289175b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.376964 kubelet[2757]: E1123 22:56:10.376898 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1517095169e8a4791f0031c42425ffdf3f961c63d81c38591641182289175b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" Nov 23 22:56:10.376964 kubelet[2757]: E1123 22:56:10.376924 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1517095169e8a4791f0031c42425ffdf3f961c63d81c38591641182289175b1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" Nov 23 22:56:10.377571 kubelet[2757]: E1123 22:56:10.377092 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9564bdf65-g8k2q_calico-apiserver(f2f2beaa-e94b-428d-976f-479df6d0fa8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9564bdf65-g8k2q_calico-apiserver(f2f2beaa-e94b-428d-976f-479df6d0fa8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1517095169e8a4791f0031c42425ffdf3f961c63d81c38591641182289175b1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:56:10.425079 containerd[1533]: time="2025-11-23T22:56:10.425033516Z" level=error msg="Failed to destroy network for sandbox \"0fd604a906bb800b3c7af077c45136c9ef84c87b5b3ae64f0fc45cf75661ff92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.426684 containerd[1533]: time="2025-11-23T22:56:10.426592504Z" level=error msg="Failed to destroy network for sandbox \"01d3b73bda968d05d4a26c0a61d0bf0e9624d6200fc18807f8e35c081cfdec21\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 23 22:56:10.427394 containerd[1533]: time="2025-11-23T22:56:10.427178594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k4l4d,Uid:ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fd604a906bb800b3c7af077c45136c9ef84c87b5b3ae64f0fc45cf75661ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.428088 kubelet[2757]: E1123 22:56:10.427689 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fd604a906bb800b3c7af077c45136c9ef84c87b5b3ae64f0fc45cf75661ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.428088 kubelet[2757]: E1123 22:56:10.427751 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fd604a906bb800b3c7af077c45136c9ef84c87b5b3ae64f0fc45cf75661ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k4l4d" Nov 23 22:56:10.428088 kubelet[2757]: E1123 22:56:10.427770 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fd604a906bb800b3c7af077c45136c9ef84c87b5b3ae64f0fc45cf75661ff92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-k4l4d" Nov 23 22:56:10.428240 kubelet[2757]: E1123 22:56:10.427822 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-k4l4d_calico-system(ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-k4l4d_calico-system(ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fd604a906bb800b3c7af077c45136c9ef84c87b5b3ae64f0fc45cf75661ff92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:56:10.429407 containerd[1533]: time="2025-11-23T22:56:10.429371113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8686b5dd99-pdrwp,Uid:4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d3b73bda968d05d4a26c0a61d0bf0e9624d6200fc18807f8e35c081cfdec21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.429891 kubelet[2757]: E1123 22:56:10.429849 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"01d3b73bda968d05d4a26c0a61d0bf0e9624d6200fc18807f8e35c081cfdec21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.430023 kubelet[2757]: E1123 22:56:10.430006 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d3b73bda968d05d4a26c0a61d0bf0e9624d6200fc18807f8e35c081cfdec21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8686b5dd99-pdrwp" Nov 23 22:56:10.430116 kubelet[2757]: E1123 22:56:10.430101 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01d3b73bda968d05d4a26c0a61d0bf0e9624d6200fc18807f8e35c081cfdec21\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8686b5dd99-pdrwp" Nov 23 22:56:10.430327 kubelet[2757]: E1123 22:56:10.430276 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8686b5dd99-pdrwp_calico-system(4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8686b5dd99-pdrwp_calico-system(4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01d3b73bda968d05d4a26c0a61d0bf0e9624d6200fc18807f8e35c081cfdec21\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8686b5dd99-pdrwp" podUID="4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b" Nov 23 22:56:10.432233 containerd[1533]: time="2025-11-23T22:56:10.432151322Z" level=error msg="Failed to destroy network for sandbox \"7abc90c0917bac2d027dcfa59cea219114abe3f3eeaf7962ee28bbf19fc1b593\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.434288 containerd[1533]: time="2025-11-23T22:56:10.434131357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mnrvc,Uid:e8685cb9-790c-467e-8fb6-e40f7f6bef3f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7abc90c0917bac2d027dcfa59cea219114abe3f3eeaf7962ee28bbf19fc1b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.434422 kubelet[2757]: E1123 22:56:10.434391 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7abc90c0917bac2d027dcfa59cea219114abe3f3eeaf7962ee28bbf19fc1b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.434465 kubelet[2757]: E1123 
22:56:10.434440 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7abc90c0917bac2d027dcfa59cea219114abe3f3eeaf7962ee28bbf19fc1b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mnrvc" Nov 23 22:56:10.434494 kubelet[2757]: E1123 22:56:10.434461 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7abc90c0917bac2d027dcfa59cea219114abe3f3eeaf7962ee28bbf19fc1b593\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mnrvc" Nov 23 22:56:10.434531 kubelet[2757]: E1123 22:56:10.434504 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mnrvc_kube-system(e8685cb9-790c-467e-8fb6-e40f7f6bef3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mnrvc_kube-system(e8685cb9-790c-467e-8fb6-e40f7f6bef3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7abc90c0917bac2d027dcfa59cea219114abe3f3eeaf7962ee28bbf19fc1b593\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mnrvc" podUID="e8685cb9-790c-467e-8fb6-e40f7f6bef3f" Nov 23 22:56:10.437596 containerd[1533]: time="2025-11-23T22:56:10.437478976Z" level=error msg="Failed to destroy network for sandbox \"e7402e5bcac9003b0c148548ca3ba3821ec75f6685025bc2d4a4153614df088a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.439747 containerd[1533]: time="2025-11-23T22:56:10.439592894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-pdtkd,Uid:ae58e09c-3642-4da5-a2ea-675ec846270c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7402e5bcac9003b0c148548ca3ba3821ec75f6685025bc2d4a4153614df088a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.440522 kubelet[2757]: E1123 22:56:10.440440 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7402e5bcac9003b0c148548ca3ba3821ec75f6685025bc2d4a4153614df088a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.441654 kubelet[2757]: E1123 22:56:10.440769 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7402e5bcac9003b0c148548ca3ba3821ec75f6685025bc2d4a4153614df088a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" Nov 23 22:56:10.441759 kubelet[2757]: E1123 22:56:10.441711 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7402e5bcac9003b0c148548ca3ba3821ec75f6685025bc2d4a4153614df088a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" Nov 23 22:56:10.442099 kubelet[2757]: E1123 22:56:10.442056 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9564bdf65-pdtkd_calico-apiserver(ae58e09c-3642-4da5-a2ea-675ec846270c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9564bdf65-pdtkd_calico-apiserver(ae58e09c-3642-4da5-a2ea-675ec846270c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7402e5bcac9003b0c148548ca3ba3821ec75f6685025bc2d4a4153614df088a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:56:10.490962 containerd[1533]: time="2025-11-23T22:56:10.489614738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 22:56:11.332954 systemd[1]: Created slice kubepods-besteffort-pod21480ae4_8b64_4bd3_93f8_a08b2cf68bf0.slice - libcontainer container kubepods-besteffort-pod21480ae4_8b64_4bd3_93f8_a08b2cf68bf0.slice. Nov 23 22:56:11.336389 containerd[1533]: time="2025-11-23T22:56:11.336325199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h75fx,Uid:21480ae4-8b64-4bd3-93f8-a08b2cf68bf0,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:11.404408 containerd[1533]: time="2025-11-23T22:56:11.402793760Z" level=error msg="Failed to destroy network for sandbox \"9beac0dcad23b749554ae44a08249ffce22e3b0b60f35bf7e760c652b617edf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:11.404905 systemd[1]: run-netns-cni\x2d41c7192b\x2d096c\x2d5fbd\x2d0b20\x2dbd664b4821ad.mount: Deactivated successfully. 
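Every sandbox failure above is the same condition: the Calico CNI plugin stats /var/lib/calico/nodename before doing an ADD or DEL and gives up when the file is missing, which is expected at this point because the calico/node image only starts pulling at 22:56:10.489. A minimal sketch of the readiness check the error message suggests; the path comes from the log, the script itself is illustrative and not part of Calico:

#!/usr/bin/env python3
"""Mirror the stat the Calico CNI plugin performs before ADD/DEL
(see the "stat /var/lib/calico/nodename" errors above)."""
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted verbatim in the CNI errors

def calico_node_ready(path: str = NODENAME_FILE) -> bool:
    """True once calico-node has written its node name into the host mount."""
    try:
        with open(path) as f:
            return bool(f.read().strip())
    except FileNotFoundError:
        # Exactly what the plugin reports: calico-node is not running yet,
        # or /var/lib/calico/ is not mounted into it.
        return False

if __name__ == "__main__":
    ready = calico_node_ready()
    print("calico-node ready" if ready else f"missing {NODENAME_FILE}")
    sys.exit(0 if ready else 1)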
Nov 23 22:56:11.407090 containerd[1533]: time="2025-11-23T22:56:11.407014114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h75fx,Uid:21480ae4-8b64-4bd3-93f8-a08b2cf68bf0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beac0dcad23b749554ae44a08249ffce22e3b0b60f35bf7e760c652b617edf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:11.407604 kubelet[2757]: E1123 22:56:11.407331 2757 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beac0dcad23b749554ae44a08249ffce22e3b0b60f35bf7e760c652b617edf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:11.407604 kubelet[2757]: E1123 22:56:11.407391 2757 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beac0dcad23b749554ae44a08249ffce22e3b0b60f35bf7e760c652b617edf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h75fx" Nov 23 22:56:11.407604 kubelet[2757]: E1123 22:56:11.407412 2757 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9beac0dcad23b749554ae44a08249ffce22e3b0b60f35bf7e760c652b617edf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h75fx" Nov 23 22:56:11.409046 kubelet[2757]: E1123 22:56:11.407460 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9beac0dcad23b749554ae44a08249ffce22e3b0b60f35bf7e760c652b617edf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:15.391429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000659626.mount: Deactivated successfully. 
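The two "Deactivated successfully" entries show how systemd encodes filesystem paths into mount-unit names: the leading '/' is dropped, remaining '/' become '-', and characters outside roughly [A-Za-z0-9:_.] are written as \xNN escapes, which is why /run/netns/cni-41c7192b-... appears as run-netns-cni\x2d41c7192b\x2d....mount. A rough Python imitation of systemd-escape --path, only precise enough to decode the unit names in this log (the exact character class systemd preserves is an assumption here):

def systemd_path_escape(path: str) -> str:
    """Rough imitation of `systemd-escape --path`: drop the leading '/',
    turn remaining '/' into '-', keep [A-Za-z0-9:_.] as-is, and write
    anything else as a \\xNN escape (enough for the unit names above)."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# The network-namespace mount unit from the entry above:
print(systemd_path_escape("/run/netns/cni-41c7192b-096c-5fbd-0b20-bd664b4821ad") + ".mount")
# -> run-netns-cni\x2d41c7192b\x2d096c\x2d5fbd\x2d0b20\x2dbd664b4821ad.mount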
Nov 23 22:56:15.418949 containerd[1533]: time="2025-11-23T22:56:15.418887109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:15.420100 containerd[1533]: time="2025-11-23T22:56:15.420020127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 22:56:15.421658 containerd[1533]: time="2025-11-23T22:56:15.421070465Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:15.423477 containerd[1533]: time="2025-11-23T22:56:15.423421464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:15.424173 containerd[1533]: time="2025-11-23T22:56:15.424145236Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.934441416s" Nov 23 22:56:15.424320 containerd[1533]: time="2025-11-23T22:56:15.424302359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 22:56:15.448365 containerd[1533]: time="2025-11-23T22:56:15.448311719Z" level=info msg="CreateContainer within sandbox \"c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 22:56:15.467649 containerd[1533]: time="2025-11-23T22:56:15.467081671Z" level=info msg="Container 0277c255f1ea2708716b6371aa920b48a07c0ae2bba5a433bb88b5e327e1e89a: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:15.471071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094446696.mount: Deactivated successfully. Nov 23 22:56:15.486012 containerd[1533]: time="2025-11-23T22:56:15.485819623Z" level=info msg="CreateContainer within sandbox \"c9b6dc605000f9c1b08f02effc3ba4c41d617d7c464837fa9401efb3f61899f3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0277c255f1ea2708716b6371aa920b48a07c0ae2bba5a433bb88b5e327e1e89a\"" Nov 23 22:56:15.487477 containerd[1533]: time="2025-11-23T22:56:15.486825360Z" level=info msg="StartContainer for \"0277c255f1ea2708716b6371aa920b48a07c0ae2bba5a433bb88b5e327e1e89a\"" Nov 23 22:56:15.490070 containerd[1533]: time="2025-11-23T22:56:15.490029813Z" level=info msg="connecting to shim 0277c255f1ea2708716b6371aa920b48a07c0ae2bba5a433bb88b5e327e1e89a" address="unix:///run/containerd/s/1d5bb6b4f4f2c23bd45abbeeab14c4bd3a0ab2eda132e54085c3c2ed30959392" protocol=ttrpc version=3 Nov 23 22:56:15.568137 systemd[1]: Started cri-containerd-0277c255f1ea2708716b6371aa920b48a07c0ae2bba5a433bb88b5e327e1e89a.scope - libcontainer container 0277c255f1ea2708716b6371aa920b48a07c0ae2bba5a433bb88b5e327e1e89a. Nov 23 22:56:15.679573 containerd[1533]: time="2025-11-23T22:56:15.679442487Z" level=info msg="StartContainer for \"0277c255f1ea2708716b6371aa920b48a07c0ae2bba5a433bb88b5e327e1e89a\" returns successfully" Nov 23 22:56:15.819670 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 23 22:56:15.819818 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 23 22:56:16.121403 kubelet[2757]: I1123 22:56:16.121349 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2kh8\" (UniqueName: \"kubernetes.io/projected/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-kube-api-access-h2kh8\") pod \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\" (UID: \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\") " Nov 23 22:56:16.121403 kubelet[2757]: I1123 22:56:16.121409 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-backend-key-pair\") pod \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\" (UID: \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\") " Nov 23 22:56:16.122307 kubelet[2757]: I1123 22:56:16.121436 2757 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-ca-bundle\") pod \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\" (UID: \"4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b\") " Nov 23 22:56:16.128695 kubelet[2757]: I1123 22:56:16.127928 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b" (UID: "4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 22:56:16.128829 kubelet[2757]: I1123 22:56:16.128742 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-kube-api-access-h2kh8" (OuterVolumeSpecName: "kube-api-access-h2kh8") pod "4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b" (UID: "4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b"). InnerVolumeSpecName "kube-api-access-h2kh8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 22:56:16.130848 kubelet[2757]: I1123 22:56:16.130793 2757 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b" (UID: "4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 22:56:16.222879 kubelet[2757]: I1123 22:56:16.222833 2757 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h2kh8\" (UniqueName: \"kubernetes.io/projected/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-kube-api-access-h2kh8\") on node \"ci-4459-1-2-3-c3120372ad\" DevicePath \"\"" Nov 23 22:56:16.222879 kubelet[2757]: I1123 22:56:16.222873 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-backend-key-pair\") on node \"ci-4459-1-2-3-c3120372ad\" DevicePath \"\"" Nov 23 22:56:16.222879 kubelet[2757]: I1123 22:56:16.222885 2757 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b-whisker-ca-bundle\") on node \"ci-4459-1-2-3-c3120372ad\" DevicePath \"\"" Nov 23 22:56:16.397853 systemd[1]: var-lib-kubelet-pods-4d9ba8ba\x2d5ef4\x2d48c0\x2da1ef\x2d9e82a35d575b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh2kh8.mount: Deactivated successfully. Nov 23 22:56:16.398412 systemd[1]: var-lib-kubelet-pods-4d9ba8ba\x2d5ef4\x2d48c0\x2da1ef\x2d9e82a35d575b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 22:56:16.532280 systemd[1]: Removed slice kubepods-besteffort-pod4d9ba8ba_5ef4_48c0_a1ef_9e82a35d575b.slice - libcontainer container kubepods-besteffort-pod4d9ba8ba_5ef4_48c0_a1ef_9e82a35d575b.slice. Nov 23 22:56:16.552542 kubelet[2757]: I1123 22:56:16.552435 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8brhz" podStartSLOduration=2.468822474 podStartE2EDuration="15.552398839s" podCreationTimestamp="2025-11-23 22:56:01 +0000 UTC" firstStartedPulling="2025-11-23 22:56:02.341868293 +0000 UTC m=+27.170822721" lastFinishedPulling="2025-11-23 22:56:15.425444658 +0000 UTC m=+40.254399086" observedRunningTime="2025-11-23 22:56:16.550099041 +0000 UTC m=+41.379053509" watchObservedRunningTime="2025-11-23 22:56:16.552398839 +0000 UTC m=+41.381353267" Nov 23 22:56:16.639715 systemd[1]: Created slice kubepods-besteffort-pod14be2267_c3d8_4884_b5c4_de72ade3d8e8.slice - libcontainer container kubepods-besteffort-pod14be2267_c3d8_4884_b5c4_de72ade3d8e8.slice. 
Nov 23 22:56:16.726573 kubelet[2757]: I1123 22:56:16.726317 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/14be2267-c3d8-4884-b5c4-de72ade3d8e8-whisker-backend-key-pair\") pod \"whisker-64cc7645c9-8ptpv\" (UID: \"14be2267-c3d8-4884-b5c4-de72ade3d8e8\") " pod="calico-system/whisker-64cc7645c9-8ptpv" Nov 23 22:56:16.726573 kubelet[2757]: I1123 22:56:16.726475 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjh6s\" (UniqueName: \"kubernetes.io/projected/14be2267-c3d8-4884-b5c4-de72ade3d8e8-kube-api-access-sjh6s\") pod \"whisker-64cc7645c9-8ptpv\" (UID: \"14be2267-c3d8-4884-b5c4-de72ade3d8e8\") " pod="calico-system/whisker-64cc7645c9-8ptpv" Nov 23 22:56:16.726826 kubelet[2757]: I1123 22:56:16.726551 2757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14be2267-c3d8-4884-b5c4-de72ade3d8e8-whisker-ca-bundle\") pod \"whisker-64cc7645c9-8ptpv\" (UID: \"14be2267-c3d8-4884-b5c4-de72ade3d8e8\") " pod="calico-system/whisker-64cc7645c9-8ptpv" Nov 23 22:56:16.945607 containerd[1533]: time="2025-11-23T22:56:16.945487150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64cc7645c9-8ptpv,Uid:14be2267-c3d8-4884-b5c4-de72ade3d8e8,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:17.164769 systemd-networkd[1417]: caliba8309e8144: Link UP Nov 23 22:56:17.166093 systemd-networkd[1417]: caliba8309e8144: Gained carrier Nov 23 22:56:17.190646 containerd[1533]: 2025-11-23 22:56:16.974 [INFO][3797] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:17.190646 containerd[1533]: 2025-11-23 22:56:17.023 [INFO][3797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0 whisker-64cc7645c9- calico-system 14be2267-c3d8-4884-b5c4-de72ade3d8e8 896 0 2025-11-23 22:56:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64cc7645c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad whisker-64cc7645c9-8ptpv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliba8309e8144 [] [] }} ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-" Nov 23 22:56:17.190646 containerd[1533]: 2025-11-23 22:56:17.023 [INFO][3797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" Nov 23 22:56:17.190646 containerd[1533]: 2025-11-23 22:56:17.076 [INFO][3809] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" HandleID="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Workload="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.076 [INFO][3809] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" HandleID="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Workload="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-3-c3120372ad", "pod":"whisker-64cc7645c9-8ptpv", "timestamp":"2025-11-23 22:56:17.076064646 +0000 UTC"}, Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.076 [INFO][3809] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.076 [INFO][3809] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.076 [INFO][3809] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.098 [INFO][3809] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.110 [INFO][3809] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.120 [INFO][3809] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.124 [INFO][3809] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.190942 containerd[1533]: 2025-11-23 22:56:17.127 [INFO][3809] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.191126 containerd[1533]: 2025-11-23 22:56:17.127 [INFO][3809] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.191126 containerd[1533]: 2025-11-23 22:56:17.129 [INFO][3809] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03 Nov 23 22:56:17.191126 containerd[1533]: 2025-11-23 22:56:17.137 [INFO][3809] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.191126 containerd[1533]: 2025-11-23 22:56:17.147 [INFO][3809] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.129/26] block=192.168.107.128/26 handle="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.191126 containerd[1533]: 2025-11-23 22:56:17.150 [INFO][3809] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.129/26] handle="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:17.191126 
containerd[1533]: 2025-11-23 22:56:17.150 [INFO][3809] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:56:17.191126 containerd[1533]: 2025-11-23 22:56:17.150 [INFO][3809] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.129/26] IPv6=[] ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" HandleID="k8s-pod-network.8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Workload="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" Nov 23 22:56:17.191274 containerd[1533]: 2025-11-23 22:56:17.153 [INFO][3797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0", GenerateName:"whisker-64cc7645c9-", Namespace:"calico-system", SelfLink:"", UID:"14be2267-c3d8-4884-b5c4-de72ade3d8e8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64cc7645c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"whisker-64cc7645c9-8ptpv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba8309e8144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:17.191274 containerd[1533]: 2025-11-23 22:56:17.153 [INFO][3797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.129/32] ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" Nov 23 22:56:17.191345 containerd[1533]: 2025-11-23 22:56:17.153 [INFO][3797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba8309e8144 ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" Nov 23 22:56:17.191345 containerd[1533]: 2025-11-23 22:56:17.166 [INFO][3797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" Nov 23 22:56:17.191383 containerd[1533]: 2025-11-23 22:56:17.167 [INFO][3797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0", GenerateName:"whisker-64cc7645c9-", Namespace:"calico-system", SelfLink:"", UID:"14be2267-c3d8-4884-b5c4-de72ade3d8e8", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64cc7645c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03", Pod:"whisker-64cc7645c9-8ptpv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.107.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliba8309e8144", MAC:"ea:64:64:14:2f:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:17.191429 containerd[1533]: 2025-11-23 22:56:17.184 [INFO][3797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" Namespace="calico-system" Pod="whisker-64cc7645c9-8ptpv" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-whisker--64cc7645c9--8ptpv-eth0" Nov 23 22:56:17.244857 containerd[1533]: time="2025-11-23T22:56:17.244670071Z" level=info msg="connecting to shim 8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03" address="unix:///run/containerd/s/caea30b73e799aee2e83edb33e18dfb8edb464067ed0a3ce9c9cd9c8c141b13f" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:17.274875 systemd[1]: Started cri-containerd-8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03.scope - libcontainer container 8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03. 
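Two values in the endpoint just written can be sanity-checked directly from the trace: the pod address 192.168.107.129/32 must fall inside the affine block the IPAM lock was taken for (192.168.107.128/26), and the MAC ea:64:64:14:2f:81 should be a locally administered unicast address (second-lowest bit of the first octet set, lowest bit clear). A short check using only values that appear above:

import ipaddress

block  = ipaddress.ip_network("192.168.107.128/26")   # affine block from the IPAM trace
pod_ip = ipaddress.ip_network("192.168.107.129/32")   # IPNetworks on the WorkloadEndpoint
first_octet = int("ea:64:64:14:2f:81".split(":")[0], 16)

print(pod_ip.subnet_of(block))                             # True: address is inside the block
print(bool(first_octet & 0x02), not (first_octet & 0x01))  # True True: locally administered, unicast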
Nov 23 22:56:17.338059 kubelet[2757]: I1123 22:56:17.337607 2757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b" path="/var/lib/kubelet/pods/4d9ba8ba-5ef4-48c0-a1ef-9e82a35d575b/volumes" Nov 23 22:56:17.440164 containerd[1533]: time="2025-11-23T22:56:17.438972154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64cc7645c9-8ptpv,Uid:14be2267-c3d8-4884-b5c4-de72ade3d8e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"8eb07a0a4d7a7c5ee8209a0ec535f3f9f831a6f6e7e49690d5c02ce4d209ec03\"" Nov 23 22:56:17.443848 containerd[1533]: time="2025-11-23T22:56:17.443809153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:56:17.791445 containerd[1533]: time="2025-11-23T22:56:17.791045646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:17.793468 containerd[1533]: time="2025-11-23T22:56:17.793406845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:56:17.793776 containerd[1533]: time="2025-11-23T22:56:17.793516246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:56:17.796148 kubelet[2757]: E1123 22:56:17.796090 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:17.796312 kubelet[2757]: E1123 22:56:17.796185 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:17.808276 kubelet[2757]: E1123 22:56:17.808188 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6b66b90b128846799f19a7f06b34548e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:17.811045 containerd[1533]: time="2025-11-23T22:56:17.810903850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:56:18.113821 containerd[1533]: time="2025-11-23T22:56:18.113586238Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:18.116569 containerd[1533]: time="2025-11-23T22:56:18.116405403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:56:18.116569 containerd[1533]: time="2025-11-23T22:56:18.116463444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:18.116872 kubelet[2757]: E1123 22:56:18.116814 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:18.116956 kubelet[2757]: E1123 22:56:18.116886 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:18.117554 kubelet[2757]: E1123 22:56:18.117082 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:18.118887 kubelet[2757]: E1123 22:56:18.118710 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:56:18.230815 systemd-networkd[1417]: caliba8309e8144: Gained IPv6LL Nov 23 22:56:18.552234 kubelet[2757]: E1123 22:56:18.552119 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:56:21.324602 containerd[1533]: time="2025-11-23T22:56:21.324258514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-g8k2q,Uid:f2f2beaa-e94b-428d-976f-479df6d0fa8f,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:21.471974 systemd-networkd[1417]: calib6d4d35cbb2: Link UP Nov 23 22:56:21.473093 systemd-networkd[1417]: calib6d4d35cbb2: Gained carrier Nov 23 22:56:21.491409 containerd[1533]: 2025-11-23 22:56:21.358 [INFO][4068] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:21.491409 containerd[1533]: 2025-11-23 22:56:21.376 [INFO][4068] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0 calico-apiserver-9564bdf65- calico-apiserver f2f2beaa-e94b-428d-976f-479df6d0fa8f 819 0 2025-11-23 22:55:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9564bdf65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad calico-apiserver-9564bdf65-g8k2q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6d4d35cbb2 [] [] }} ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-" Nov 23 22:56:21.491409 containerd[1533]: 2025-11-23 22:56:21.376 [INFO][4068] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" Nov 23 22:56:21.491409 containerd[1533]: 2025-11-23 22:56:21.407 [INFO][4080] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" HandleID="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.407 [INFO][4080] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" HandleID="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" 
Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab5e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-2-3-c3120372ad", "pod":"calico-apiserver-9564bdf65-g8k2q", "timestamp":"2025-11-23 22:56:21.407757977 +0000 UTC"}, Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.408 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.408 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.408 [INFO][4080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.420 [INFO][4080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.427 [INFO][4080] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.435 [INFO][4080] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.438 [INFO][4080] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491684 containerd[1533]: 2025-11-23 22:56:21.444 [INFO][4080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491930 containerd[1533]: 2025-11-23 22:56:21.444 [INFO][4080] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491930 containerd[1533]: 2025-11-23 22:56:21.447 [INFO][4080] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e Nov 23 22:56:21.491930 containerd[1533]: 2025-11-23 22:56:21.453 [INFO][4080] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491930 containerd[1533]: 2025-11-23 22:56:21.461 [INFO][4080] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.130/26] block=192.168.107.128/26 handle="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491930 containerd[1533]: 2025-11-23 22:56:21.462 [INFO][4080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.130/26] handle="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:21.491930 containerd[1533]: 2025-11-23 22:56:21.462 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:56:21.491930 containerd[1533]: 2025-11-23 22:56:21.462 [INFO][4080] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.130/26] IPv6=[] ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" HandleID="k8s-pod-network.002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" Nov 23 22:56:21.492142 containerd[1533]: 2025-11-23 22:56:21.467 [INFO][4068] cni-plugin/k8s.go 418: Populated endpoint ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0", GenerateName:"calico-apiserver-9564bdf65-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2f2beaa-e94b-428d-976f-479df6d0fa8f", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9564bdf65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"calico-apiserver-9564bdf65-g8k2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6d4d35cbb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:21.492214 containerd[1533]: 2025-11-23 22:56:21.467 [INFO][4068] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.130/32] ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" Nov 23 22:56:21.492214 containerd[1533]: 2025-11-23 22:56:21.467 [INFO][4068] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6d4d35cbb2 ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" Nov 23 22:56:21.492214 containerd[1533]: 2025-11-23 22:56:21.473 [INFO][4068] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" Nov 23 22:56:21.492285 containerd[1533]: 2025-11-23 22:56:21.474 
[INFO][4068] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0", GenerateName:"calico-apiserver-9564bdf65-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2f2beaa-e94b-428d-976f-479df6d0fa8f", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9564bdf65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e", Pod:"calico-apiserver-9564bdf65-g8k2q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6d4d35cbb2", MAC:"fa:18:2a:eb:cc:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:21.492334 containerd[1533]: 2025-11-23 22:56:21.487 [INFO][4068] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-g8k2q" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--g8k2q-eth0" Nov 23 22:56:21.517010 containerd[1533]: time="2025-11-23T22:56:21.516857320Z" level=info msg="connecting to shim 002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e" address="unix:///run/containerd/s/8b9815f8b38390abefeaa7a9d59b6bd8a560bfc0c812fa0c3fd7d3c6b8767c20" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:21.549953 systemd[1]: Started cri-containerd-002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e.scope - libcontainer container 002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e. 
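Every Calico image pull in this boot follows the same pattern: containerd resolves the reference against ghcr.io, the registry answers 404 Not Found for the v3.30.4 tag, and kubelet surfaces it as ErrImagePull and then ImagePullBackOff (the whisker and whisker-backend containers above, the apiserver containers below). A minimal sketch of the same manifest lookup, assuming the registry follows the standard token-auth flow for anonymous pulls; the repository and tag are copied from the log, everything else (names, structure) is illustrative:

#!/usr/bin/env python3
"""Check whether an image tag exists in an OCI registry (stdlib only).

A sketch, not containerd's code path: it follows the WWW-Authenticate
challenge to obtain an anonymous pull token, then HEADs the manifest.
A 404 here corresponds to the "fetch failed after status: 404 Not Found"
and "not found" errors in the journal above.
"""
import json
import re
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPO = "flatcar/calico/whisker-backend"   # repository taken from the log
TAG = "v3.30.4"                           # tag taken from the log

ACCEPT = ", ".join([
    "application/vnd.oci.image.index.v1+json",
    "application/vnd.oci.image.manifest.v1+json",
    "application/vnd.docker.distribution.manifest.list.v2+json",
    "application/vnd.docker.distribution.manifest.v2+json",
])


def anonymous_token(registry, repo):
    """Follow the registry's auth challenge and fetch an anonymous pull token."""
    try:
        urllib.request.urlopen(f"https://{registry}/v2/")
    except urllib.error.HTTPError as err:
        challenge = dict(re.findall(r'(\w+)="([^"]*)"',
                                    err.headers.get("WWW-Authenticate", "")))
        url = (f"{challenge['realm']}?service={challenge.get('service', registry)}"
               f"&scope=repository:{repo}:pull")
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["token"]
    raise RuntimeError("registry issued no auth challenge")


def tag_exists(registry, repo, tag):
    """HEAD the manifest; True if the tag resolves, False on 404."""
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {anonymous_token(registry, repo)}",
                 "Accept": ACCEPT},
        method="HEAD",
    )
    try:
        urllib.request.urlopen(req)
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise


if __name__ == "__main__":
    print(f"{REGISTRY}/{REPO}:{TAG} ->",
          "found" if tag_exists(REGISTRY, REPO, TAG) else "not found")

For a tag that is actually published the check returns found; here the journal's repeated NotFound rpc errors say the v3.30.4 tag simply does not resolve under ghcr.io/flatcar/calico.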
Nov 23 22:56:21.596168 containerd[1533]: time="2025-11-23T22:56:21.596046915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-g8k2q,Uid:f2f2beaa-e94b-428d-976f-479df6d0fa8f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"002278943d9ce24846d2698c37ced4684af86e2430a9bc71f589b398110d232e\"" Nov 23 22:56:21.599688 containerd[1533]: time="2025-11-23T22:56:21.598950641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:21.926071 containerd[1533]: time="2025-11-23T22:56:21.925972145Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:21.927605 containerd[1533]: time="2025-11-23T22:56:21.927523889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:21.927752 containerd[1533]: time="2025-11-23T22:56:21.927684731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:21.928694 kubelet[2757]: E1123 22:56:21.927926 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:21.928694 kubelet[2757]: E1123 22:56:21.928059 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:21.928694 kubelet[2757]: E1123 22:56:21.928263 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jzx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-g8k2q_calico-apiserver(f2f2beaa-e94b-428d-976f-479df6d0fa8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:21.930017 kubelet[2757]: E1123 22:56:21.929966 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:56:22.324148 containerd[1533]: time="2025-11-23T22:56:22.324010226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-pdtkd,Uid:ae58e09c-3642-4da5-a2ea-675ec846270c,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:22.472038 systemd-networkd[1417]: cali9019e6a21ff: Link UP Nov 23 22:56:22.478057 systemd-networkd[1417]: cali9019e6a21ff: Gained carrier Nov 23 22:56:22.502095 containerd[1533]: 2025-11-23 22:56:22.357 [INFO][4162] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:22.502095 containerd[1533]: 2025-11-23 22:56:22.374 [INFO][4162] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0 calico-apiserver-9564bdf65- calico-apiserver ae58e09c-3642-4da5-a2ea-675ec846270c 821 0 2025-11-23 22:55:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9564bdf65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad calico-apiserver-9564bdf65-pdtkd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9019e6a21ff [] [] }} ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-" Nov 23 22:56:22.502095 containerd[1533]: 2025-11-23 22:56:22.374 [INFO][4162] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" Nov 23 22:56:22.502095 containerd[1533]: 2025-11-23 22:56:22.408 [INFO][4172] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" HandleID="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.408 [INFO][4172] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" HandleID="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-2-3-c3120372ad", "pod":"calico-apiserver-9564bdf65-pdtkd", "timestamp":"2025-11-23 22:56:22.408130046 +0000 UTC"}, Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.408 [INFO][4172] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.408 [INFO][4172] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.408 [INFO][4172] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.423 [INFO][4172] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.430 [INFO][4172] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.437 [INFO][4172] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.439 [INFO][4172] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502565 containerd[1533]: 2025-11-23 22:56:22.443 [INFO][4172] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502819 containerd[1533]: 2025-11-23 22:56:22.444 [INFO][4172] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502819 containerd[1533]: 2025-11-23 22:56:22.447 [INFO][4172] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539 Nov 23 22:56:22.502819 containerd[1533]: 2025-11-23 22:56:22.454 [INFO][4172] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502819 containerd[1533]: 2025-11-23 22:56:22.461 [INFO][4172] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.131/26] block=192.168.107.128/26 handle="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502819 containerd[1533]: 2025-11-23 22:56:22.461 [INFO][4172] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.131/26] handle="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:22.502819 containerd[1533]: 2025-11-23 22:56:22.462 [INFO][4172] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:56:22.502819 containerd[1533]: 2025-11-23 22:56:22.462 [INFO][4172] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.131/26] IPv6=[] ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" HandleID="k8s-pod-network.383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" Nov 23 22:56:22.503548 containerd[1533]: 2025-11-23 22:56:22.464 [INFO][4162] cni-plugin/k8s.go 418: Populated endpoint ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0", GenerateName:"calico-apiserver-9564bdf65-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae58e09c-3642-4da5-a2ea-675ec846270c", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9564bdf65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"calico-apiserver-9564bdf65-pdtkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9019e6a21ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.503915 containerd[1533]: 2025-11-23 22:56:22.465 [INFO][4162] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.131/32] ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" Nov 23 22:56:22.503915 containerd[1533]: 2025-11-23 22:56:22.465 [INFO][4162] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9019e6a21ff ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" Nov 23 22:56:22.503915 containerd[1533]: 2025-11-23 22:56:22.477 [INFO][4162] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" Nov 23 22:56:22.504874 containerd[1533]: 2025-11-23 22:56:22.478 
[INFO][4162] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0", GenerateName:"calico-apiserver-9564bdf65-", Namespace:"calico-apiserver", SelfLink:"", UID:"ae58e09c-3642-4da5-a2ea-675ec846270c", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9564bdf65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539", Pod:"calico-apiserver-9564bdf65-pdtkd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.107.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9019e6a21ff", MAC:"6a:e0:6d:08:85:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.504961 containerd[1533]: 2025-11-23 22:56:22.498 [INFO][4162] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" Namespace="calico-apiserver" Pod="calico-apiserver-9564bdf65-pdtkd" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--apiserver--9564bdf65--pdtkd-eth0" Nov 23 22:56:22.544334 containerd[1533]: time="2025-11-23T22:56:22.544181908Z" level=info msg="connecting to shim 383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539" address="unix:///run/containerd/s/484f04eb320237c7d151e054a6f5c241e0c77c9d7e92eb121f5aaace6b49feaf" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:22.568231 kubelet[2757]: E1123 22:56:22.568170 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:56:22.601297 systemd[1]: Started cri-containerd-383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539.scope - libcontainer container 383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539. 
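The IPAM walk is identical for each sandbox: the plugin takes the host-wide lock, confirms this node's affinity for the 192.168.107.128/26 block, and hands out the next free address (.130 for the first apiserver pod above, .131 for this one). The arithmetic behind those log lines, as a small sketch with the values copied from the journal; the real plugin also serialises access with the host-wide IPAM lock and persists the block in the datastore, which is not modelled here:

#!/usr/bin/env python3
"""Sketch of the address bookkeeping in the ipam/ipam.go lines above."""
from ipaddress import ip_address, ip_network

block = ip_network("192.168.107.128/26")            # affinity block from the log
assigned = {
    "calico-apiserver-9564bdf65-g8k2q": ip_address("192.168.107.130"),
    "calico-apiserver-9564bdf65-pdtkd": ip_address("192.168.107.131"),
}

for pod, addr in assigned.items():
    # Every claimed address must fall inside the host's affinity block.
    assert addr in block, f"{addr} is outside the affinity block {block}"
    print(f"{pod}: {addr}/32 from block {block}")

print(f"block {block} holds {block.num_addresses} addresses in total")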
Nov 23 22:56:22.684368 containerd[1533]: time="2025-11-23T22:56:22.684232952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9564bdf65-pdtkd,Uid:ae58e09c-3642-4da5-a2ea-675ec846270c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"383f25fd0181bc7e66a125f863e7b6f19b9261109a617360b375628f66208539\"" Nov 23 22:56:22.686767 containerd[1533]: time="2025-11-23T22:56:22.686720711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:22.776849 systemd-networkd[1417]: calib6d4d35cbb2: Gained IPv6LL Nov 23 22:56:23.023229 containerd[1533]: time="2025-11-23T22:56:23.022973863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:23.028098 containerd[1533]: time="2025-11-23T22:56:23.027916099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:23.029414 containerd[1533]: time="2025-11-23T22:56:23.028061901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:23.029590 kubelet[2757]: E1123 22:56:23.028715 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:23.029590 kubelet[2757]: E1123 22:56:23.028784 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:23.029590 kubelet[2757]: E1123 22:56:23.028975 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tql6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-pdtkd_calico-apiserver(ae58e09c-3642-4da5-a2ea-675ec846270c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:23.030397 kubelet[2757]: E1123 22:56:23.030342 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:56:23.325690 containerd[1533]: time="2025-11-23T22:56:23.324982364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755bdf67f9-xqvvt,Uid:22998d4f-4bc5-4628-a75e-c9b585fec59a,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:23.513911 systemd-networkd[1417]: cali44d31a823b0: Link UP Nov 23 22:56:23.514778 systemd-networkd[1417]: cali44d31a823b0: Gained carrier Nov 23 22:56:23.534669 containerd[1533]: 2025-11-23 22:56:23.376 [INFO][4257] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:23.534669 containerd[1533]: 2025-11-23 22:56:23.408 [INFO][4257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0 calico-kube-controllers-755bdf67f9- calico-system 22998d4f-4bc5-4628-a75e-c9b585fec59a 820 0 2025-11-23 22:56:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:755bdf67f9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad calico-kube-controllers-755bdf67f9-xqvvt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali44d31a823b0 [] [] }} ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-" Nov 23 22:56:23.534669 containerd[1533]: 2025-11-23 22:56:23.408 [INFO][4257] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" Nov 23 22:56:23.534669 containerd[1533]: 2025-11-23 22:56:23.441 [INFO][4268] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" HandleID="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.441 [INFO][4268] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" HandleID="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-3-c3120372ad", "pod":"calico-kube-controllers-755bdf67f9-xqvvt", "timestamp":"2025-11-23 22:56:23.441652149 +0000 UTC"}, Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.443 [INFO][4268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.443 [INFO][4268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.443 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.459 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.469 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.476 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.479 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535540 containerd[1533]: 2025-11-23 22:56:23.483 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535933 containerd[1533]: 2025-11-23 22:56:23.483 [INFO][4268] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535933 containerd[1533]: 2025-11-23 22:56:23.486 [INFO][4268] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3 Nov 23 22:56:23.535933 containerd[1533]: 2025-11-23 22:56:23.497 [INFO][4268] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535933 containerd[1533]: 2025-11-23 22:56:23.506 [INFO][4268] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.132/26] block=192.168.107.128/26 handle="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535933 containerd[1533]: 2025-11-23 22:56:23.506 [INFO][4268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.132/26] handle="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:23.535933 containerd[1533]: 2025-11-23 22:56:23.506 [INFO][4268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
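While the kube-controllers sandbox is being wired up here, the whisker and apiserver containers already sit in ImagePullBackOff, which is why the same error keeps reappearing in the journal at growing intervals. A sketch of that retry schedule; the 10-second initial delay, doubling, and 300-second cap are assumed defaults, not something the journal states:

#!/usr/bin/env python3
"""Sketch of the exponential back-off behind the ImagePullBackOff entries.

The parameters are assumptions; the journal above only shows the resulting
repeated "Back-off pulling image" errors, not the schedule itself.
"""

def backoff_delays(initial=10.0, cap=300.0):
    """Yield successive retry delays: double each time, never exceed the cap."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

gen = backoff_delays()
for attempt in range(1, 8):
    print(f"retry {attempt}: wait {next(gen):.0f}s before pulling again")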
Nov 23 22:56:23.535933 containerd[1533]: 2025-11-23 22:56:23.507 [INFO][4268] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.132/26] IPv6=[] ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" HandleID="k8s-pod-network.de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Workload="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" Nov 23 22:56:23.536172 containerd[1533]: 2025-11-23 22:56:23.510 [INFO][4257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0", GenerateName:"calico-kube-controllers-755bdf67f9-", Namespace:"calico-system", SelfLink:"", UID:"22998d4f-4bc5-4628-a75e-c9b585fec59a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755bdf67f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"calico-kube-controllers-755bdf67f9-xqvvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44d31a823b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:23.536278 containerd[1533]: 2025-11-23 22:56:23.510 [INFO][4257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.132/32] ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" Nov 23 22:56:23.536278 containerd[1533]: 2025-11-23 22:56:23.510 [INFO][4257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44d31a823b0 ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" Nov 23 22:56:23.536278 containerd[1533]: 2025-11-23 22:56:23.515 [INFO][4257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" 
WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" Nov 23 22:56:23.536441 containerd[1533]: 2025-11-23 22:56:23.515 [INFO][4257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0", GenerateName:"calico-kube-controllers-755bdf67f9-", Namespace:"calico-system", SelfLink:"", UID:"22998d4f-4bc5-4628-a75e-c9b585fec59a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"755bdf67f9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3", Pod:"calico-kube-controllers-755bdf67f9-xqvvt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali44d31a823b0", MAC:"16:98:4f:33:2d:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:23.536683 containerd[1533]: 2025-11-23 22:56:23.532 [INFO][4257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" Namespace="calico-system" Pod="calico-kube-controllers-755bdf67f9-xqvvt" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-calico--kube--controllers--755bdf67f9--xqvvt-eth0" Nov 23 22:56:23.568969 containerd[1533]: time="2025-11-23T22:56:23.568701213Z" level=info msg="connecting to shim de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3" address="unix:///run/containerd/s/6da25ab878e3bc73f73e9928a4843f8e6c0bb66f4079bc6ecef8692ffa80c8b9" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:23.575919 kubelet[2757]: E1123 22:56:23.575468 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:56:23.577449 kubelet[2757]: E1123 
22:56:23.577393 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:56:23.625899 systemd[1]: Started cri-containerd-de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3.scope - libcontainer container de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3. Nov 23 22:56:23.725593 containerd[1533]: time="2025-11-23T22:56:23.725446211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-755bdf67f9-xqvvt,Uid:22998d4f-4bc5-4628-a75e-c9b585fec59a,Namespace:calico-system,Attempt:0,} returns sandbox id \"de65cd8a9c82741ed63ff1d2be6abba3913fb32c341de4ee682cbcdca9080ca3\"" Nov 23 22:56:23.728723 containerd[1533]: time="2025-11-23T22:56:23.728279215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:56:24.071050 containerd[1533]: time="2025-11-23T22:56:24.070958688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:24.072794 containerd[1533]: time="2025-11-23T22:56:24.072734795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:56:24.072948 containerd[1533]: time="2025-11-23T22:56:24.072854917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:24.073177 kubelet[2757]: E1123 22:56:24.073135 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:24.073499 kubelet[2757]: E1123 22:56:24.073192 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:24.073499 kubelet[2757]: E1123 22:56:24.073379 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn4px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-755bdf67f9-xqvvt_calico-system(22998d4f-4bc5-4628-a75e-c9b585fec59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:24.074533 kubelet[2757]: E1123 22:56:24.074484 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:56:24.118875 systemd-networkd[1417]: cali9019e6a21ff: Gained IPv6LL Nov 23 22:56:24.462062 
systemd[1]: Started sshd@7-91.98.91.202:22-185.156.73.233:54092.service - OpenSSH per-connection server daemon (185.156.73.233:54092). Nov 23 22:56:24.582648 kubelet[2757]: E1123 22:56:24.582031 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:56:24.583109 kubelet[2757]: E1123 22:56:24.583038 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:56:25.207000 systemd-networkd[1417]: cali44d31a823b0: Gained IPv6LL Nov 23 22:56:25.325725 containerd[1533]: time="2025-11-23T22:56:25.325569135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvrbp,Uid:b21617f8-e4ed-43f4-8cea-63bb326da3a4,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:25.327130 containerd[1533]: time="2025-11-23T22:56:25.327080238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h75fx,Uid:21480ae4-8b64-4bd3-93f8-a08b2cf68bf0,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:25.329057 containerd[1533]: time="2025-11-23T22:56:25.328446859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mnrvc,Uid:e8685cb9-790c-467e-8fb6-e40f7f6bef3f,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:25.330819 containerd[1533]: time="2025-11-23T22:56:25.330725613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k4l4d,Uid:ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:25.593698 kubelet[2757]: E1123 22:56:25.593549 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:56:25.772483 systemd-networkd[1417]: cali3a73919f2c5: Link UP Nov 23 22:56:25.773712 systemd-networkd[1417]: cali3a73919f2c5: Gained carrier Nov 23 22:56:25.805610 containerd[1533]: 2025-11-23 22:56:25.481 [INFO][4381] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:25.805610 containerd[1533]: 2025-11-23 22:56:25.522 [INFO][4381] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0 goldmane-666569f655- calico-system ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee 825 0 2025-11-23 22:55:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad goldmane-666569f655-k4l4d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3a73919f2c5 [] [] }} ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-" Nov 23 22:56:25.805610 containerd[1533]: 2025-11-23 22:56:25.522 [INFO][4381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" Nov 23 22:56:25.805610 containerd[1533]: 2025-11-23 22:56:25.652 [INFO][4429] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" HandleID="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Workload="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.653 [INFO][4429] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" HandleID="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Workload="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-3-c3120372ad", "pod":"goldmane-666569f655-k4l4d", "timestamp":"2025-11-23 22:56:25.65291813 +0000 UTC"}, Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.653 [INFO][4429] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.653 [INFO][4429] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.653 [INFO][4429] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.689 [INFO][4429] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.704 [INFO][4429] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.717 [INFO][4429] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.721 [INFO][4429] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806003 containerd[1533]: 2025-11-23 22:56:25.726 [INFO][4429] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806219 containerd[1533]: 2025-11-23 22:56:25.726 [INFO][4429] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806219 containerd[1533]: 2025-11-23 22:56:25.729 [INFO][4429] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262 Nov 23 22:56:25.806219 containerd[1533]: 2025-11-23 22:56:25.741 [INFO][4429] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806219 containerd[1533]: 2025-11-23 22:56:25.754 [INFO][4429] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.133/26] block=192.168.107.128/26 handle="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806219 containerd[1533]: 2025-11-23 22:56:25.754 [INFO][4429] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.133/26] handle="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.806219 containerd[1533]: 2025-11-23 22:56:25.754 [INFO][4429] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
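Note on the IPAM sequence above: the AutoAssignArgs dump (Num4:1, Num6:0, HandleID, Attrs with namespace/node/pod, Hostname "ci-4459-1-2-3-c3120372ad", IntendedUse:"Workload") is the request the Calico CNI IPAM plugin hands to libcalico-go before it confirms affinity for block 192.168.107.128/26 and claims 192.168.107.133/26. A minimal Go sketch of that call follows, purely for illustration: the argument fields and values are taken from the log lines above, while the client construction (clientv3.NewFromEnv), import paths, and the exact AutoAssign return types are assumptions about the libcalico-go release in use, not the plugin's actual code.

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/projectcalico/calico/libcalico-go/lib/clientv3" // assumed monorepo import path
    "github.com/projectcalico/calico/libcalico-go/lib/ipam"
)

func main() {
    // Assumption: client configured from the environment (datastore type, kubeconfig, ...).
    c, err := clientv3.NewFromEnv()
    if err != nil {
        log.Fatal(err)
    }

    // Handle and attributes copied from the ipam/ipam_plugin.go messages above.
    handle := "k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262"
    args := ipam.AutoAssignArgs{
        Num4:     1,
        Num6:     0,
        HandleID: &handle,
        Hostname: "ci-4459-1-2-3-c3120372ad",
        Attrs: map[string]string{
            "namespace": "calico-system",
            "node":      "ci-4459-1-2-3-c3120372ad",
            "pod":       "goldmane-666569f655-k4l4d",
        },
        // IntendedUse left at its default here; the log shows it resolved to "Workload".
    }

    // Assumption: this AutoAssign shape (v4 result, v6 result, error) matches the
    // version behind the [INFO][4429] ipam/ipam.go lines above.
    v4, v6, err := c.IPAM().AutoAssign(context.Background(), args)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("assigned IPv4:", v4, "IPv6:", v6)
}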
Nov 23 22:56:25.806219 containerd[1533]: 2025-11-23 22:56:25.754 [INFO][4429] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.133/26] IPv6=[] ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" HandleID="k8s-pod-network.7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Workload="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" Nov 23 22:56:25.806370 containerd[1533]: 2025-11-23 22:56:25.767 [INFO][4381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"goldmane-666569f655-k4l4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3a73919f2c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:25.806426 containerd[1533]: 2025-11-23 22:56:25.767 [INFO][4381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.133/32] ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" Nov 23 22:56:25.806426 containerd[1533]: 2025-11-23 22:56:25.767 [INFO][4381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a73919f2c5 ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" Nov 23 22:56:25.806426 containerd[1533]: 2025-11-23 22:56:25.775 [INFO][4381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" Nov 23 22:56:25.806492 containerd[1533]: 2025-11-23 22:56:25.778 [INFO][4381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262", Pod:"goldmane-666569f655-k4l4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.107.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3a73919f2c5", MAC:"e2:f9:a1:f8:e5:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:25.806589 containerd[1533]: 2025-11-23 22:56:25.800 [INFO][4381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" Namespace="calico-system" Pod="goldmane-666569f655-k4l4d" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-goldmane--666569f655--k4l4d-eth0" Nov 23 22:56:25.858529 containerd[1533]: time="2025-11-23T22:56:25.858395975Z" level=info msg="connecting to shim 7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262" address="unix:///run/containerd/s/93d93b6846cfdad0ddfaca685731e4384639989b5385f22ad1f5d53308cfefaa" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:25.885790 systemd-networkd[1417]: cali683d9fe4958: Link UP Nov 23 22:56:25.886893 systemd-networkd[1417]: cali683d9fe4958: Gained carrier Nov 23 22:56:25.937597 containerd[1533]: 2025-11-23 22:56:25.455 [INFO][4359] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:25.937597 containerd[1533]: 2025-11-23 22:56:25.513 [INFO][4359] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0 coredns-674b8bbfcf- kube-system e8685cb9-790c-467e-8fb6-e40f7f6bef3f 823 0 2025-11-23 22:55:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad coredns-674b8bbfcf-mnrvc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali683d9fe4958 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-" Nov 23 22:56:25.937597 containerd[1533]: 2025-11-23 22:56:25.513 [INFO][4359] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" Nov 23 22:56:25.937597 containerd[1533]: 2025-11-23 22:56:25.694 [INFO][4422] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" HandleID="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Workload="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.695 [INFO][4422] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" HandleID="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Workload="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000418450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-2-3-c3120372ad", "pod":"coredns-674b8bbfcf-mnrvc", "timestamp":"2025-11-23 22:56:25.694979842 +0000 UTC"}, Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.695 [INFO][4422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.754 [INFO][4422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.755 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.789 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.801 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.818 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.822 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938425 containerd[1533]: 2025-11-23 22:56:25.828 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938755 containerd[1533]: 2025-11-23 22:56:25.828 [INFO][4422] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938755 containerd[1533]: 2025-11-23 22:56:25.840 [INFO][4422] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd Nov 23 22:56:25.938755 containerd[1533]: 2025-11-23 22:56:25.857 [INFO][4422] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938755 containerd[1533]: 2025-11-23 22:56:25.873 [INFO][4422] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.134/26] block=192.168.107.128/26 handle="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938755 containerd[1533]: 2025-11-23 22:56:25.873 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.134/26] handle="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:25.938755 containerd[1533]: 2025-11-23 22:56:25.873 [INFO][4422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:56:25.938755 containerd[1533]: 2025-11-23 22:56:25.874 [INFO][4422] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.134/26] IPv6=[] ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" HandleID="k8s-pod-network.8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Workload="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" Nov 23 22:56:25.938887 containerd[1533]: 2025-11-23 22:56:25.879 [INFO][4359] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8685cb9-790c-467e-8fb6-e40f7f6bef3f", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"coredns-674b8bbfcf-mnrvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali683d9fe4958", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:25.938887 containerd[1533]: 2025-11-23 22:56:25.879 [INFO][4359] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.134/32] ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" Nov 23 22:56:25.938887 containerd[1533]: 2025-11-23 22:56:25.879 [INFO][4359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali683d9fe4958 ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" Nov 23 22:56:25.938887 containerd[1533]: 2025-11-23 22:56:25.885 [INFO][4359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" Nov 23 22:56:25.938887 containerd[1533]: 2025-11-23 22:56:25.890 [INFO][4359] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e8685cb9-790c-467e-8fb6-e40f7f6bef3f", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd", Pod:"coredns-674b8bbfcf-mnrvc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali683d9fe4958", MAC:"76:95:05:06:df:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:25.938887 containerd[1533]: 2025-11-23 22:56:25.931 [INFO][4359] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-mnrvc" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--mnrvc-eth0" Nov 23 22:56:25.953952 systemd[1]: Started cri-containerd-7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262.scope - libcontainer container 7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262. 
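Aside on the endpoint dumps above: the coredns container ports are rendered in Go's %#v hex form (Port:0x35, Port:0x23c1), which are the same dns/dns-tcp/metrics ports listed in decimal earlier in the same entries (53, 53, 9153). A trivial Go check of that conversion, included only for readability of the dumps:

package main

import "fmt"

func main() {
    // 0x35 and 0x23c1 are the hex renderings of the coredns ports in the WorkloadEndpoint dumps.
    fmt.Println(0x35, 0x23c1) // prints: 53 9153
}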
Nov 23 22:56:25.958473 sshd[4351]: Invalid user admin from 185.156.73.233 port 54092 Nov 23 22:56:26.007317 containerd[1533]: time="2025-11-23T22:56:26.007101287Z" level=info msg="connecting to shim 8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd" address="unix:///run/containerd/s/c2b22be9ab186df951c73ff1f59b0804295525fba31f48f79ae14716a9a4882d" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:26.010701 sshd[4351]: Connection closed by invalid user admin 185.156.73.233 port 54092 [preauth] Nov 23 22:56:26.016937 systemd[1]: sshd@7-91.98.91.202:22-185.156.73.233:54092.service: Deactivated successfully. Nov 23 22:56:26.045122 systemd-networkd[1417]: cali1c452429668: Link UP Nov 23 22:56:26.045954 systemd-networkd[1417]: cali1c452429668: Gained carrier Nov 23 22:56:26.112137 systemd[1]: Started cri-containerd-8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd.scope - libcontainer container 8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd. Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.476 [INFO][4376] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.511 [INFO][4376] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0 csi-node-driver- calico-system 21480ae4-8b64-4bd3-93f8-a08b2cf68bf0 727 0 2025-11-23 22:56:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad csi-node-driver-h75fx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1c452429668 [] [] }} ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.511 [INFO][4376] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.706 [INFO][4427] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" HandleID="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Workload="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.706 [INFO][4427] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" HandleID="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Workload="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000367760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-3-c3120372ad", "pod":"csi-node-driver-h75fx", "timestamp":"2025-11-23 22:56:25.706268891 +0000 UTC"}, 
Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.706 [INFO][4427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.874 [INFO][4427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.874 [INFO][4427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.925 [INFO][4427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.941 [INFO][4427] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.971 [INFO][4427] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.976 [INFO][4427] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.981 [INFO][4427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.982 [INFO][4427] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.989 [INFO][4427] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64 Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:25.997 [INFO][4427] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:26.016 [INFO][4427] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.135/26] block=192.168.107.128/26 handle="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:26.016 [INFO][4427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.135/26] handle="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:26.016 [INFO][4427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
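Interleaved with the CNI setup, sshd reports a password-guessing probe ("Invalid user admin from 185.156.73.233 port 54092", connection closed [preauth]) that systemd then deactivates. A small Go sketch that scans journal text on stdin and tallies such preauth probes per source address; the regular expression and the tooling are illustrative assumptions, not anything running on this host:

package main

import (
    "bufio"
    "fmt"
    "os"
    "regexp"
)

func main() {
    // Matches sshd lines such as: "Invalid user admin from 185.156.73.233 port 54092".
    probe := regexp.MustCompile(`Invalid user (\S+) from (\S+) port (\d+)`)
    counts := map[string]int{}

    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
    for sc.Scan() {
        if m := probe.FindStringSubmatch(sc.Text()); m != nil {
            counts[m[2]]++ // tally probes per source IP
        }
    }
    for addr, n := range counts {
        fmt.Printf("%s: %d preauth probe(s)\n", addr, n)
    }
}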
Nov 23 22:56:26.126347 containerd[1533]: 2025-11-23 22:56:26.016 [INFO][4427] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.135/26] IPv6=[] ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" HandleID="k8s-pod-network.32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Workload="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" Nov 23 22:56:26.128524 containerd[1533]: 2025-11-23 22:56:26.028 [INFO][4376] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21480ae4-8b64-4bd3-93f8-a08b2cf68bf0", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"csi-node-driver-h75fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c452429668", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:26.128524 containerd[1533]: 2025-11-23 22:56:26.029 [INFO][4376] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.135/32] ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" Nov 23 22:56:26.128524 containerd[1533]: 2025-11-23 22:56:26.029 [INFO][4376] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c452429668 ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" Nov 23 22:56:26.128524 containerd[1533]: 2025-11-23 22:56:26.045 [INFO][4376] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" Nov 23 22:56:26.128524 containerd[1533]: 2025-11-23 22:56:26.047 [INFO][4376] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21480ae4-8b64-4bd3-93f8-a08b2cf68bf0", ResourceVersion:"727", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64", Pod:"csi-node-driver-h75fx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1c452429668", MAC:"0e:33:8b:8b:cc:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:26.128524 containerd[1533]: 2025-11-23 22:56:26.077 [INFO][4376] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" Namespace="calico-system" Pod="csi-node-driver-h75fx" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-csi--node--driver--h75fx-eth0" Nov 23 22:56:26.131270 containerd[1533]: time="2025-11-23T22:56:26.131187173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-k4l4d,Uid:ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"7000df29f23636ad4637023398b5b3d21bc1c14a2786fb07c542b7aedab7d262\"" Nov 23 22:56:26.138737 containerd[1533]: time="2025-11-23T22:56:26.138690524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:56:26.170505 containerd[1533]: time="2025-11-23T22:56:26.170445877Z" level=info msg="connecting to shim 32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64" address="unix:///run/containerd/s/4bce5f98f94d63515068f6cce0b3b580d750b8235f578fd1759ef58e2f29bbac" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:26.229616 systemd-networkd[1417]: calidfa2d59d2e4: Link UP Nov 23 22:56:26.232194 systemd-networkd[1417]: calidfa2d59d2e4: Gained carrier Nov 23 22:56:26.248269 systemd[1]: Started cri-containerd-32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64.scope - libcontainer container 32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64. 
Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:25.484 [INFO][4356] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:25.526 [INFO][4356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0 coredns-674b8bbfcf- kube-system b21617f8-e4ed-43f4-8cea-63bb326da3a4 818 0 2025-11-23 22:55:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-2-3-c3120372ad coredns-674b8bbfcf-dvrbp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidfa2d59d2e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:25.528 [INFO][4356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:25.715 [INFO][4432] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" HandleID="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Workload="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:25.715 [INFO][4432] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" HandleID="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Workload="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-2-3-c3120372ad", "pod":"coredns-674b8bbfcf-dvrbp", "timestamp":"2025-11-23 22:56:25.715134704 +0000 UTC"}, Hostname:"ci-4459-1-2-3-c3120372ad", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:25.716 [INFO][4432] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.017 [INFO][4432] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.019 [INFO][4432] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-3-c3120372ad' Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.046 [INFO][4432] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.139 [INFO][4432] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.167 [INFO][4432] ipam/ipam.go 511: Trying affinity for 192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.174 [INFO][4432] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.185 [INFO][4432] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.128/26 host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.186 [INFO][4432] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.107.128/26 handle="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.189 [INFO][4432] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7 Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.198 [INFO][4432] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.107.128/26 handle="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.212 [INFO][4432] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.107.136/26] block=192.168.107.128/26 handle="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.212 [INFO][4432] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.136/26] handle="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" host="ci-4459-1-2-3-c3120372ad" Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.212 [INFO][4432] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
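All four sandboxes in this stretch draw from the same affine block 192.168.107.128/26, which is why the assigned addresses come out sequentially (.133 goldmane, .134 coredns-mnrvc, .135 csi-node-driver, .136 coredns-dvrbp). As a quick capacity check, a /26 block spans 2^(32-26) = 64 addresses; a one-line Go confirmation of that arithmetic:

package main

import "fmt"

func main() {
    const prefixLen = 26
    fmt.Println("addresses in a /26 block:", 1<<(32-prefixLen)) // 64
}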
Nov 23 22:56:26.272128 containerd[1533]: 2025-11-23 22:56:26.212 [INFO][4432] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.107.136/26] IPv6=[] ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" HandleID="k8s-pod-network.d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Workload="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" Nov 23 22:56:26.274193 containerd[1533]: 2025-11-23 22:56:26.220 [INFO][4356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b21617f8-e4ed-43f4-8cea-63bb326da3a4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"", Pod:"coredns-674b8bbfcf-dvrbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfa2d59d2e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:26.274193 containerd[1533]: 2025-11-23 22:56:26.221 [INFO][4356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.136/32] ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" Nov 23 22:56:26.274193 containerd[1533]: 2025-11-23 22:56:26.221 [INFO][4356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidfa2d59d2e4 ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" Nov 23 22:56:26.274193 containerd[1533]: 2025-11-23 22:56:26.237 [INFO][4356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" Nov 23 22:56:26.274193 containerd[1533]: 2025-11-23 22:56:26.237 [INFO][4356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b21617f8-e4ed-43f4-8cea-63bb326da3a4", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-3-c3120372ad", ContainerID:"d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7", Pod:"coredns-674b8bbfcf-dvrbp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidfa2d59d2e4", MAC:"12:d7:39:f4:51:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:26.274193 containerd[1533]: 2025-11-23 22:56:26.262 [INFO][4356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvrbp" WorkloadEndpoint="ci--4459--1--2--3--c3120372ad-k8s-coredns--674b8bbfcf--dvrbp-eth0" Nov 23 22:56:26.286833 containerd[1533]: time="2025-11-23T22:56:26.285458628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mnrvc,Uid:e8685cb9-790c-467e-8fb6-e40f7f6bef3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd\"" Nov 23 22:56:26.300574 containerd[1533]: time="2025-11-23T22:56:26.300156286Z" level=info msg="CreateContainer within sandbox \"8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:56:26.302046 containerd[1533]: time="2025-11-23T22:56:26.301844432Z" level=info msg="connecting to shim d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7" 
address="unix:///run/containerd/s/8f74a7dc3024de7dbc0ed8c34c967c0f56d2e4607718d208fbd65db59642bf08" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:26.320439 containerd[1533]: time="2025-11-23T22:56:26.320395828Z" level=info msg="Container 228740a6d274f21617d074fe71f21832dd90a118383c9e2242ea240ed793bb84: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:26.331224 containerd[1533]: time="2025-11-23T22:56:26.331132147Z" level=info msg="CreateContainer within sandbox \"8266b89ffe421ad74e3a2831883b7ca567c45e973046ed753624f0122fe347fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"228740a6d274f21617d074fe71f21832dd90a118383c9e2242ea240ed793bb84\"" Nov 23 22:56:26.337655 containerd[1533]: time="2025-11-23T22:56:26.335901978Z" level=info msg="StartContainer for \"228740a6d274f21617d074fe71f21832dd90a118383c9e2242ea240ed793bb84\"" Nov 23 22:56:26.353406 containerd[1533]: time="2025-11-23T22:56:26.353343318Z" level=info msg="connecting to shim 228740a6d274f21617d074fe71f21832dd90a118383c9e2242ea240ed793bb84" address="unix:///run/containerd/s/c2b22be9ab186df951c73ff1f59b0804295525fba31f48f79ae14716a9a4882d" protocol=ttrpc version=3 Nov 23 22:56:26.358620 containerd[1533]: time="2025-11-23T22:56:26.358547155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h75fx,Uid:21480ae4-8b64-4bd3-93f8-a08b2cf68bf0,Namespace:calico-system,Attempt:0,} returns sandbox id \"32dc0fc6b23fa3d549712ee85f7fbc58fa04154eeccf8c79b18e45d2a7749f64\"" Nov 23 22:56:26.373088 systemd[1]: Started cri-containerd-d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7.scope - libcontainer container d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7. Nov 23 22:56:26.397536 systemd[1]: Started cri-containerd-228740a6d274f21617d074fe71f21832dd90a118383c9e2242ea240ed793bb84.scope - libcontainer container 228740a6d274f21617d074fe71f21832dd90a118383c9e2242ea240ed793bb84. 
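The PullImage failures that follow, like the earlier ImagePullBackOff entries for the apiserver and kube-controllers images, all resolve to a 404 from ghcr.io: the v3.30.4 tags under ghcr.io/flatcar/calico/* are reported as not found. A hedged Go sketch that retries one of these pulls directly against the node's containerd socket in the k8s.io namespace, to separate a registry-side 404 from kubelet-side backoff; the socket path and namespace are taken from the log, while the containerd client module version is an assumption:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Same socket the CRI uses on this node.
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // CRI-managed images live in the k8s.io namespace (matching namespace=k8s.io in the shim logs above).
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"
    img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    if err != nil {
        // Expected here: a "not found" error mirroring the NotFound / 404 messages in the journal.
        log.Fatalf("pull %s: %v", ref, err)
    }
    fmt.Println("pulled", img.Name())
}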
Nov 23 22:56:26.465046 containerd[1533]: time="2025-11-23T22:56:26.464973738Z" level=info msg="StartContainer for \"228740a6d274f21617d074fe71f21832dd90a118383c9e2242ea240ed793bb84\" returns successfully" Nov 23 22:56:26.485982 containerd[1533]: time="2025-11-23T22:56:26.485355962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvrbp,Uid:b21617f8-e4ed-43f4-8cea-63bb326da3a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7\"" Nov 23 22:56:26.496645 containerd[1533]: time="2025-11-23T22:56:26.496391646Z" level=info msg="CreateContainer within sandbox \"d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:56:26.500407 containerd[1533]: time="2025-11-23T22:56:26.499997059Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:26.502775 containerd[1533]: time="2025-11-23T22:56:26.502717740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:56:26.503173 containerd[1533]: time="2025-11-23T22:56:26.502966024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:26.503619 kubelet[2757]: E1123 22:56:26.503580 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:26.505786 kubelet[2757]: E1123 22:56:26.505348 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:26.505786 kubelet[2757]: E1123 22:56:26.505736 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8jgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k4l4d_calico-system(ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:26.507222 containerd[1533]: time="2025-11-23T22:56:26.506694879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:56:26.507805 kubelet[2757]: E1123 22:56:26.507604 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:56:26.549143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032286828.mount: Deactivated successfully. Nov 23 22:56:26.552660 containerd[1533]: time="2025-11-23T22:56:26.550570412Z" level=info msg="Container 38f632548ab0199072bc465be1971407decf3ecca197ad99f1c1b78646c30a93: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:26.568643 containerd[1533]: time="2025-11-23T22:56:26.568476998Z" level=info msg="CreateContainer within sandbox \"d11ef5672306e0367a07e2ae10710862a820e5ed33ca18e78fa66f359651ceb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38f632548ab0199072bc465be1971407decf3ecca197ad99f1c1b78646c30a93\"" Nov 23 22:56:26.570496 containerd[1533]: time="2025-11-23T22:56:26.569085607Z" level=info msg="StartContainer for \"38f632548ab0199072bc465be1971407decf3ecca197ad99f1c1b78646c30a93\"" Nov 23 22:56:26.571314 containerd[1533]: time="2025-11-23T22:56:26.571278960Z" level=info msg="connecting to shim 38f632548ab0199072bc465be1971407decf3ecca197ad99f1c1b78646c30a93" address="unix:///run/containerd/s/8f74a7dc3024de7dbc0ed8c34c967c0f56d2e4607718d208fbd65db59642bf08" protocol=ttrpc version=3 Nov 23 22:56:26.600134 systemd[1]: Started cri-containerd-38f632548ab0199072bc465be1971407decf3ecca197ad99f1c1b78646c30a93.scope - libcontainer container 38f632548ab0199072bc465be1971407decf3ecca197ad99f1c1b78646c30a93. Nov 23 22:56:26.615143 kubelet[2757]: E1123 22:56:26.614891 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:56:26.632853 kubelet[2757]: I1123 22:56:26.631254 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mnrvc" podStartSLOduration=44.631234092 podStartE2EDuration="44.631234092s" podCreationTimestamp="2025-11-23 22:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:56:26.631170131 +0000 UTC m=+51.460124599" watchObservedRunningTime="2025-11-23 22:56:26.631234092 +0000 UTC m=+51.460188520" Nov 23 22:56:26.702932 containerd[1533]: time="2025-11-23T22:56:26.702873837Z" level=info msg="StartContainer for \"38f632548ab0199072bc465be1971407decf3ecca197ad99f1c1b78646c30a93\" returns successfully" Nov 23 22:56:26.840052 containerd[1533]: time="2025-11-23T22:56:26.839987517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:26.843017 containerd[1533]: time="2025-11-23T22:56:26.842940241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:56:26.843723 containerd[1533]: time="2025-11-23T22:56:26.842981602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active 
requests=0, bytes read=69" Nov 23 22:56:26.843943 kubelet[2757]: E1123 22:56:26.843446 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:26.843943 kubelet[2757]: E1123 22:56:26.843499 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:26.843943 kubelet[2757]: E1123 22:56:26.843702 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:26.846395 containerd[1533]: time="2025-11-23T22:56:26.846332132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:56:27.198080 containerd[1533]: time="2025-11-23T22:56:27.198033298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:27.199952 containerd[1533]: time="2025-11-23T22:56:27.199833204Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:56:27.199952 containerd[1533]: time="2025-11-23T22:56:27.199895405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:56:27.200154 kubelet[2757]: E1123 22:56:27.200064 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:27.200154 kubelet[2757]: E1123 22:56:27.200131 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:27.200377 kubelet[2757]: E1123 22:56:27.200280 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:27.201588 kubelet[2757]: E1123 22:56:27.201514 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:27.383091 systemd-networkd[1417]: calidfa2d59d2e4: Gained IPv6LL Nov 23 22:56:27.447467 systemd-networkd[1417]: cali1c452429668: Gained IPv6LL Nov 23 22:56:27.510814 systemd-networkd[1417]: cali683d9fe4958: Gained IPv6LL Nov 23 22:56:27.628972 kubelet[2757]: E1123 22:56:27.628761 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:56:27.633818 kubelet[2757]: E1123 22:56:27.632735 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:27.697528 kubelet[2757]: I1123 22:56:27.697370 2757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dvrbp" podStartSLOduration=45.697328019 podStartE2EDuration="45.697328019s" podCreationTimestamp="2025-11-23 22:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:56:27.695982399 +0000 
UTC m=+52.524936827" watchObservedRunningTime="2025-11-23 22:56:27.697328019 +0000 UTC m=+52.526282487" Nov 23 22:56:27.830958 systemd-networkd[1417]: cali3a73919f2c5: Gained IPv6LL Nov 23 22:56:29.060719 kubelet[2757]: I1123 22:56:29.060676 2757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 23 22:56:29.813154 systemd-networkd[1417]: vxlan.calico: Link UP Nov 23 22:56:29.813164 systemd-networkd[1417]: vxlan.calico: Gained carrier Nov 23 22:56:31.094928 systemd-networkd[1417]: vxlan.calico: Gained IPv6LL Nov 23 22:56:33.341036 containerd[1533]: time="2025-11-23T22:56:33.338733466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:56:33.676047 containerd[1533]: time="2025-11-23T22:56:33.675984518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:33.679000 containerd[1533]: time="2025-11-23T22:56:33.678118427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:56:33.679000 containerd[1533]: time="2025-11-23T22:56:33.678145588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:56:33.679316 kubelet[2757]: E1123 22:56:33.679119 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:33.679316 kubelet[2757]: E1123 22:56:33.679167 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:33.680047 kubelet[2757]: E1123 22:56:33.679334 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6b66b90b128846799f19a7f06b34548e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:33.682854 containerd[1533]: time="2025-11-23T22:56:33.682817533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:56:34.034585 containerd[1533]: time="2025-11-23T22:56:34.033834775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:34.036038 containerd[1533]: time="2025-11-23T22:56:34.035943724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:56:34.036240 containerd[1533]: time="2025-11-23T22:56:34.036102287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:34.036619 kubelet[2757]: E1123 22:56:34.036521 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:34.036857 kubelet[2757]: E1123 22:56:34.036825 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:34.037643 kubelet[2757]: E1123 22:56:34.037430 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:34.039334 kubelet[2757]: E1123 22:56:34.039268 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:56:37.326670 containerd[1533]: time="2025-11-23T22:56:37.326011694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:56:37.651123 containerd[1533]: time="2025-11-23T22:56:37.651043203Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Nov 23 22:56:37.654169 containerd[1533]: time="2025-11-23T22:56:37.654010643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:56:37.655013 containerd[1533]: time="2025-11-23T22:56:37.654333248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:37.655092 kubelet[2757]: E1123 22:56:37.654697 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:37.655092 kubelet[2757]: E1123 22:56:37.654792 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:37.655092 kubelet[2757]: E1123 22:56:37.655025 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn4px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-755bdf67f9-xqvvt_calico-system(22998d4f-4bc5-4628-a75e-c9b585fec59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:37.655976 containerd[1533]: time="2025-11-23T22:56:37.655433783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:37.656479 kubelet[2757]: E1123 22:56:37.656380 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:56:37.995487 containerd[1533]: time="2025-11-23T22:56:37.995037810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:37.997395 containerd[1533]: time="2025-11-23T22:56:37.997213840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:37.997395 containerd[1533]: time="2025-11-23T22:56:37.997363242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:37.997753 kubelet[2757]: E1123 22:56:37.997609 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:37.997862 kubelet[2757]: E1123 22:56:37.997743 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:37.998516 kubelet[2757]: 
E1123 22:56:37.998053 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tql6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-pdtkd_calico-apiserver(ae58e09c-3642-4da5-a2ea-675ec846270c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:37.999369 kubelet[2757]: E1123 22:56:37.999299 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:56:38.324497 containerd[1533]: time="2025-11-23T22:56:38.324354707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:38.670965 containerd[1533]: time="2025-11-23T22:56:38.670732994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:38.672482 containerd[1533]: time="2025-11-23T22:56:38.672322855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:38.672482 containerd[1533]: time="2025-11-23T22:56:38.672327295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:38.672686 kubelet[2757]: E1123 22:56:38.672598 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:38.672686 kubelet[2757]: E1123 22:56:38.672681 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:38.673026 kubelet[2757]: E1123 22:56:38.672816 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jzx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-g8k2q_calico-apiserver(f2f2beaa-e94b-428d-976f-479df6d0fa8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:38.674522 kubelet[2757]: E1123 22:56:38.674428 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:56:39.332106 containerd[1533]: time="2025-11-23T22:56:39.332007672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:56:39.664579 containerd[1533]: time="2025-11-23T22:56:39.664495501Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:39.666141 containerd[1533]: time="2025-11-23T22:56:39.666069922Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:56:39.666535 containerd[1533]: time="2025-11-23T22:56:39.666204844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:56:39.666603 kubelet[2757]: E1123 22:56:39.666535 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:39.666710 kubelet[2757]: E1123 22:56:39.666603 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:39.667211 kubelet[2757]: E1123 22:56:39.666916 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:39.667399 containerd[1533]: time="2025-11-23T22:56:39.667334219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:56:39.998573 containerd[1533]: time="2025-11-23T22:56:39.998141386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:40.003853 containerd[1533]: time="2025-11-23T22:56:40.003736621Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:56:40.004036 containerd[1533]: time="2025-11-23T22:56:40.003781701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:40.004101 kubelet[2757]: E1123 22:56:40.004055 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:40.004533 kubelet[2757]: E1123 22:56:40.004119 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:40.004533 kubelet[2757]: E1123 22:56:40.004379 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8jgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k4l4d_calico-system(ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:40.005288 containerd[1533]: time="2025-11-23T22:56:40.005239441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:56:40.005753 kubelet[2757]: E1123 22:56:40.005697 2757 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:56:40.337494 containerd[1533]: time="2025-11-23T22:56:40.337196194Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:40.339088 containerd[1533]: time="2025-11-23T22:56:40.339004578Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:56:40.339288 containerd[1533]: time="2025-11-23T22:56:40.339013458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:56:40.339652 kubelet[2757]: E1123 22:56:40.339533 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:40.339906 kubelet[2757]: E1123 22:56:40.339714 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:40.340241 kubelet[2757]: E1123 22:56:40.340139 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:40.341445 kubelet[2757]: E1123 22:56:40.341388 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:46.327199 kubelet[2757]: E1123 22:56:46.327116 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:56:48.323772 kubelet[2757]: E1123 22:56:48.323689 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:56:48.582019 kernel: hrtimer: interrupt took 1645941 ns Nov 23 22:56:51.326425 kubelet[2757]: E1123 22:56:51.326358 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:56:52.331400 kubelet[2757]: E1123 22:56:52.329981 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:56:53.328265 kubelet[2757]: E1123 22:56:53.328201 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" 
podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:56:55.327665 kubelet[2757]: E1123 22:56:55.327087 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:56:59.328347 containerd[1533]: time="2025-11-23T22:56:59.328104761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:56:59.672409 containerd[1533]: time="2025-11-23T22:56:59.672345773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:59.673984 containerd[1533]: time="2025-11-23T22:56:59.673899512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:56:59.674179 containerd[1533]: time="2025-11-23T22:56:59.674038274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:56:59.674885 kubelet[2757]: E1123 22:56:59.674783 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:59.674885 kubelet[2757]: E1123 22:56:59.674853 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:59.676119 kubelet[2757]: E1123 22:56:59.676051 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6b66b90b128846799f19a7f06b34548e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:59.681772 containerd[1533]: time="2025-11-23T22:56:59.680959517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:57:00.004054 containerd[1533]: time="2025-11-23T22:57:00.003856671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:00.008353 containerd[1533]: time="2025-11-23T22:57:00.008193403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:57:00.008353 containerd[1533]: time="2025-11-23T22:57:00.008271924Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:57:00.009911 kubelet[2757]: E1123 22:57:00.009837 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:57:00.010417 kubelet[2757]: E1123 22:57:00.010098 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:57:00.012848 kubelet[2757]: E1123 22:57:00.012763 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:00.014284 kubelet[2757]: E1123 22:57:00.014218 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:57:00.328951 containerd[1533]: time="2025-11-23T22:57:00.328755954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:00.666015 containerd[1533]: time="2025-11-23T22:57:00.665963425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 
23 22:57:00.667417 containerd[1533]: time="2025-11-23T22:57:00.667360402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:00.667559 containerd[1533]: time="2025-11-23T22:57:00.667462203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:00.667806 kubelet[2757]: E1123 22:57:00.667756 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:00.667878 kubelet[2757]: E1123 22:57:00.667817 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:00.668011 kubelet[2757]: E1123 22:57:00.667961 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tql6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod calico-apiserver-9564bdf65-pdtkd_calico-apiserver(ae58e09c-3642-4da5-a2ea-675ec846270c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:00.669579 kubelet[2757]: E1123 22:57:00.669530 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:57:04.327670 containerd[1533]: time="2025-11-23T22:57:04.327456354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:04.642802 containerd[1533]: time="2025-11-23T22:57:04.642678425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:04.644660 containerd[1533]: time="2025-11-23T22:57:04.644292485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:04.644660 containerd[1533]: time="2025-11-23T22:57:04.644369125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:04.645806 kubelet[2757]: E1123 22:57:04.644598 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:04.645806 kubelet[2757]: E1123 22:57:04.644724 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:04.645806 kubelet[2757]: E1123 22:57:04.645244 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jzx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-g8k2q_calico-apiserver(f2f2beaa-e94b-428d-976f-479df6d0fa8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:04.646838 containerd[1533]: time="2025-11-23T22:57:04.646367549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:57:04.647043 kubelet[2757]: E1123 22:57:04.646714 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:57:04.979456 containerd[1533]: time="2025-11-23T22:57:04.979207311Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:04.981030 containerd[1533]: time="2025-11-23T22:57:04.980938731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:57:04.981779 containerd[1533]: 
time="2025-11-23T22:57:04.981207495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:57:04.981857 kubelet[2757]: E1123 22:57:04.981817 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:04.981928 kubelet[2757]: E1123 22:57:04.981874 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:04.982821 kubelet[2757]: E1123 22:57:04.982018 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn4px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-755bdf67f9-xqvvt_calico-system(22998d4f-4bc5-4628-a75e-c9b585fec59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:04.983222 kubelet[2757]: E1123 22:57:04.983186 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:57:06.325639 containerd[1533]: time="2025-11-23T22:57:06.325577910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:57:06.662673 containerd[1533]: time="2025-11-23T22:57:06.661834406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:06.664284 containerd[1533]: time="2025-11-23T22:57:06.664224634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:57:06.664442 containerd[1533]: time="2025-11-23T22:57:06.664265035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:57:06.664668 kubelet[2757]: E1123 22:57:06.664566 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:57:06.664965 kubelet[2757]: E1123 22:57:06.664683 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:57:06.664965 kubelet[2757]: E1123 22:57:06.664822 2757 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:06.667443 containerd[1533]: time="2025-11-23T22:57:06.667401952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:57:06.991789 containerd[1533]: time="2025-11-23T22:57:06.991527344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:06.993304 containerd[1533]: time="2025-11-23T22:57:06.993236885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:57:06.993481 containerd[1533]: time="2025-11-23T22:57:06.993351886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:57:06.993603 kubelet[2757]: E1123 22:57:06.993518 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:57:06.993603 kubelet[2757]: E1123 22:57:06.993574 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:57:06.994217 kubelet[2757]: E1123 22:57:06.994155 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:06.995326 kubelet[2757]: E1123 22:57:06.995267 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:57:10.327230 containerd[1533]: time="2025-11-23T22:57:10.327176680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:57:10.666674 containerd[1533]: time="2025-11-23T22:57:10.666475123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:10.668660 containerd[1533]: time="2025-11-23T22:57:10.668454746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:57:10.668660 containerd[1533]: time="2025-11-23T22:57:10.668592348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:10.668850 kubelet[2757]: E1123 22:57:10.668769 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:57:10.668850 kubelet[2757]: E1123 22:57:10.668826 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:57:10.669221 kubelet[2757]: E1123 22:57:10.668968 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8jgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k4l4d_calico-system(ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:10.670620 kubelet[2757]: E1123 22:57:10.670518 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:57:11.328610 kubelet[2757]: E1123 
22:57:11.328364 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:57:15.328652 kubelet[2757]: E1123 22:57:15.325823 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:57:16.325775 kubelet[2757]: E1123 22:57:16.325548 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:57:19.328957 kubelet[2757]: E1123 22:57:19.328890 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:57:21.331553 kubelet[2757]: E1123 22:57:21.331494 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:57:22.326269 kubelet[2757]: E1123 22:57:22.326035 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:57:23.325610 kubelet[2757]: E1123 22:57:23.325261 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:57:27.325302 kubelet[2757]: E1123 22:57:27.325182 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:57:28.327348 kubelet[2757]: E1123 22:57:28.327236 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:57:30.326882 kubelet[2757]: E1123 22:57:30.326483 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:57:32.328722 kubelet[2757]: E1123 22:57:32.328672 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:57:34.324003 kubelet[2757]: E1123 22:57:34.323938 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:57:35.331287 kubelet[2757]: E1123 22:57:35.331229 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:57:39.340249 kubelet[2757]: E1123 22:57:39.338819 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:57:42.324413 kubelet[2757]: E1123 22:57:42.324345 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:57:43.326926 kubelet[2757]: E1123 22:57:43.326841 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:57:43.327890 containerd[1533]: time="2025-11-23T22:57:43.326610036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:43.678726 containerd[1533]: time="2025-11-23T22:57:43.678081262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:43.679931 containerd[1533]: time="2025-11-23T22:57:43.679885442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:43.680151 containerd[1533]: time="2025-11-23T22:57:43.679940203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:43.680710 kubelet[2757]: E1123 22:57:43.680294 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:43.680909 kubelet[2757]: E1123 22:57:43.680864 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:43.681219 kubelet[2757]: E1123 22:57:43.681162 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tql6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-pdtkd_calico-apiserver(ae58e09c-3642-4da5-a2ea-675ec846270c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:43.682640 kubelet[2757]: E1123 22:57:43.682399 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:57:46.325349 containerd[1533]: time="2025-11-23T22:57:46.325188816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:57:46.669659 containerd[1533]: 
time="2025-11-23T22:57:46.669442631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:46.671400 containerd[1533]: time="2025-11-23T22:57:46.671195850Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:57:46.671400 containerd[1533]: time="2025-11-23T22:57:46.671313092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:57:46.671583 kubelet[2757]: E1123 22:57:46.671470 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:57:46.671583 kubelet[2757]: E1123 22:57:46.671561 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:57:46.672785 kubelet[2757]: E1123 22:57:46.671690 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6b66b90b128846799f19a7f06b34548e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:46.674707 containerd[1533]: time="2025-11-23T22:57:46.674672648Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:57:47.010525 containerd[1533]: time="2025-11-23T22:57:47.010385170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:47.013599 containerd[1533]: time="2025-11-23T22:57:47.013530764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:57:47.013599 containerd[1533]: time="2025-11-23T22:57:47.013556725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:57:47.014365 kubelet[2757]: E1123 22:57:47.013779 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:57:47.014365 kubelet[2757]: E1123 22:57:47.013827 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:57:47.014365 kubelet[2757]: E1123 22:57:47.013954 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:47.015280 kubelet[2757]: E1123 22:57:47.015226 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:57:49.327640 kubelet[2757]: E1123 22:57:49.327575 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:57:50.326181 containerd[1533]: time="2025-11-23T22:57:50.326121698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:50.662880 containerd[1533]: time="2025-11-23T22:57:50.662823137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:50.665671 containerd[1533]: time="2025-11-23T22:57:50.665579567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:50.665888 containerd[1533]: time="2025-11-23T22:57:50.665609608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:50.666289 kubelet[2757]: E1123 22:57:50.666232 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:50.666659 kubelet[2757]: E1123 22:57:50.666303 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:50.666659 kubelet[2757]: E1123 22:57:50.666495 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jzx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-g8k2q_calico-apiserver(f2f2beaa-e94b-428d-976f-479df6d0fa8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:50.667846 kubelet[2757]: E1123 22:57:50.667694 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:57:54.324901 kubelet[2757]: E1123 22:57:54.324831 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:57:54.328217 containerd[1533]: time="2025-11-23T22:57:54.328158842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:57:54.672094 containerd[1533]: time="2025-11-23T22:57:54.672052547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:54.674024 containerd[1533]: time="2025-11-23T22:57:54.673967288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:57:54.674231 containerd[1533]: time="2025-11-23T22:57:54.673977048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:57:54.675308 kubelet[2757]: E1123 22:57:54.675239 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:57:54.675581 kubelet[2757]: E1123 22:57:54.675291 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:57:54.676763 kubelet[2757]: E1123 22:57:54.676690 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:54.679263 containerd[1533]: time="2025-11-23T22:57:54.679221345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:57:55.011057 containerd[1533]: time="2025-11-23T22:57:55.010904678Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:55.012053 containerd[1533]: time="2025-11-23T22:57:55.011991610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:57:55.012167 containerd[1533]: time="2025-11-23T22:57:55.012099371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:57:55.012708 kubelet[2757]: E1123 22:57:55.012237 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:57:55.012708 kubelet[2757]: E1123 22:57:55.012289 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:57:55.012708 kubelet[2757]: E1123 22:57:55.012410 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
logger="UnhandledError" Nov 23 22:57:55.013844 kubelet[2757]: E1123 22:57:55.013785 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:57:55.325749 containerd[1533]: time="2025-11-23T22:57:55.325586623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:57:55.653783 containerd[1533]: time="2025-11-23T22:57:55.653715714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:55.655524 containerd[1533]: time="2025-11-23T22:57:55.655443293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:57:55.655704 containerd[1533]: time="2025-11-23T22:57:55.655453093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:57:55.655919 kubelet[2757]: E1123 22:57:55.655850 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:55.656905 kubelet[2757]: E1123 22:57:55.656023 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:55.657212 kubelet[2757]: E1123 22:57:55.657031 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nn4px,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-755bdf67f9-xqvvt_calico-system(22998d4f-4bc5-4628-a75e-c9b585fec59a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:55.658762 kubelet[2757]: E1123 22:57:55.658717 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:57:57.326889 kubelet[2757]: E1123 22:57:57.326817 2757 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:58:02.326530 containerd[1533]: time="2025-11-23T22:58:02.326463243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:58:02.649592 containerd[1533]: time="2025-11-23T22:58:02.649063618Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:02.650758 containerd[1533]: time="2025-11-23T22:58:02.650709156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:58:02.651794 containerd[1533]: time="2025-11-23T22:58:02.651665046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:58:02.652094 kubelet[2757]: E1123 22:58:02.652040 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:58:02.652491 kubelet[2757]: E1123 22:58:02.652104 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:58:02.652491 kubelet[2757]: E1123 22:58:02.652252 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8jgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-k4l4d_calico-system(ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:02.653926 kubelet[2757]: E1123 22:58:02.653864 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:58:04.325293 kubelet[2757]: E1123 
22:58:04.325247 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:58:05.329947 kubelet[2757]: E1123 22:58:05.329876 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:58:06.325042 kubelet[2757]: E1123 22:58:06.324881 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:58:09.332050 kubelet[2757]: E1123 22:58:09.331977 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:58:12.324499 kubelet[2757]: E1123 22:58:12.324442 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:58:12.683792 systemd[1]: Started sshd@8-91.98.91.202:22-139.178.89.65:47612.service - OpenSSH per-connection server daemon (139.178.89.65:47612). Nov 23 22:58:13.685937 sshd[5100]: Accepted publickey for core from 139.178.89.65 port 47612 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:13.688240 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:13.696384 systemd-logind[1501]: New session 8 of user core. Nov 23 22:58:13.703738 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 22:58:14.486614 sshd[5105]: Connection closed by 139.178.89.65 port 47612 Nov 23 22:58:14.487716 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:14.495308 systemd-logind[1501]: Session 8 logged out. Waiting for processes to exit. Nov 23 22:58:14.496672 systemd[1]: sshd@8-91.98.91.202:22-139.178.89.65:47612.service: Deactivated successfully. Nov 23 22:58:14.500999 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 22:58:14.503619 systemd-logind[1501]: Removed session 8. Nov 23 22:58:17.328982 kubelet[2757]: E1123 22:58:17.328274 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:58:17.328982 kubelet[2757]: E1123 22:58:17.328290 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:58:18.324810 kubelet[2757]: E1123 22:58:18.324758 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:58:19.654414 systemd[1]: Started 
sshd@9-91.98.91.202:22-139.178.89.65:34892.service - OpenSSH per-connection server daemon (139.178.89.65:34892). Nov 23 22:58:20.633510 sshd[5142]: Accepted publickey for core from 139.178.89.65 port 34892 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:20.635283 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:20.644188 systemd-logind[1501]: New session 9 of user core. Nov 23 22:58:20.649314 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 22:58:21.325666 kubelet[2757]: E1123 22:58:21.325170 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:58:21.381095 sshd[5145]: Connection closed by 139.178.89.65 port 34892 Nov 23 22:58:21.382294 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:21.391379 systemd[1]: sshd@9-91.98.91.202:22-139.178.89.65:34892.service: Deactivated successfully. Nov 23 22:58:21.398170 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 22:58:21.400270 systemd-logind[1501]: Session 9 logged out. Waiting for processes to exit. Nov 23 22:58:21.404447 systemd-logind[1501]: Removed session 9. Nov 23 22:58:22.326983 kubelet[2757]: E1123 22:58:22.326880 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:58:23.329849 kubelet[2757]: E1123 22:58:23.329786 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:58:26.554066 systemd[1]: Started sshd@10-91.98.91.202:22-139.178.89.65:34902.service - OpenSSH per-connection server daemon (139.178.89.65:34902). Nov 23 22:58:27.537137 sshd[5158]: Accepted publickey for core from 139.178.89.65 port 34902 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:27.541015 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:27.549272 systemd-logind[1501]: New session 10 of user core. Nov 23 22:58:27.560187 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 23 22:58:28.375484 sshd[5161]: Connection closed by 139.178.89.65 port 34902 Nov 23 22:58:28.375381 sshd-session[5158]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:28.385602 systemd[1]: sshd@10-91.98.91.202:22-139.178.89.65:34902.service: Deactivated successfully. Nov 23 22:58:28.390221 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 22:58:28.392357 systemd-logind[1501]: Session 10 logged out. Waiting for processes to exit. Nov 23 22:58:28.397141 systemd-logind[1501]: Removed session 10. Nov 23 22:58:28.548160 systemd[1]: Started sshd@11-91.98.91.202:22-139.178.89.65:34918.service - OpenSSH per-connection server daemon (139.178.89.65:34918). Nov 23 22:58:29.548641 sshd[5179]: Accepted publickey for core from 139.178.89.65 port 34918 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:29.550556 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:29.559582 systemd-logind[1501]: New session 11 of user core. Nov 23 22:58:29.566038 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 22:58:30.390388 sshd[5182]: Connection closed by 139.178.89.65 port 34918 Nov 23 22:58:30.392840 sshd-session[5179]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:30.400134 systemd[1]: sshd@11-91.98.91.202:22-139.178.89.65:34918.service: Deactivated successfully. Nov 23 22:58:30.405327 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 22:58:30.407981 systemd-logind[1501]: Session 11 logged out. Waiting for processes to exit. Nov 23 22:58:30.411231 systemd-logind[1501]: Removed session 11. Nov 23 22:58:30.558485 systemd[1]: Started sshd@12-91.98.91.202:22-139.178.89.65:47292.service - OpenSSH per-connection server daemon (139.178.89.65:47292). 
Nov 23 22:58:31.330654 kubelet[2757]: E1123 22:58:31.329829 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:58:31.330654 kubelet[2757]: E1123 22:58:31.330521 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:58:31.330654 kubelet[2757]: E1123 22:58:31.330596 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:58:31.550936 sshd[5191]: Accepted publickey for core from 139.178.89.65 port 47292 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:31.552483 sshd-session[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:31.560859 systemd-logind[1501]: New session 12 of user core. Nov 23 22:58:31.565972 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 22:58:32.316258 sshd[5194]: Connection closed by 139.178.89.65 port 47292 Nov 23 22:58:32.317704 sshd-session[5191]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:32.324511 systemd-logind[1501]: Session 12 logged out. Waiting for processes to exit. Nov 23 22:58:32.325238 systemd[1]: sshd@12-91.98.91.202:22-139.178.89.65:47292.service: Deactivated successfully. Nov 23 22:58:32.328304 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 22:58:32.333300 systemd-logind[1501]: Removed session 12. 
Nov 23 22:58:35.329436 kubelet[2757]: E1123 22:58:35.329379 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:58:35.333053 kubelet[2757]: E1123 22:58:35.332985 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:58:37.330850 kubelet[2757]: E1123 22:58:37.329684 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:58:37.484503 systemd[1]: Started sshd@13-91.98.91.202:22-139.178.89.65:47304.service - OpenSSH per-connection server daemon (139.178.89.65:47304). Nov 23 22:58:38.467451 sshd[5208]: Accepted publickey for core from 139.178.89.65 port 47304 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:38.470992 sshd-session[5208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:38.480972 systemd-logind[1501]: New session 13 of user core. Nov 23 22:58:38.484834 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 23 22:58:39.224150 sshd[5211]: Connection closed by 139.178.89.65 port 47304 Nov 23 22:58:39.223215 sshd-session[5208]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:39.228684 systemd-logind[1501]: Session 13 logged out. Waiting for processes to exit. Nov 23 22:58:39.229145 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 22:58:39.232117 systemd[1]: sshd@13-91.98.91.202:22-139.178.89.65:47304.service: Deactivated successfully. Nov 23 22:58:39.236215 systemd-logind[1501]: Removed session 13. Nov 23 22:58:39.390462 systemd[1]: Started sshd@14-91.98.91.202:22-139.178.89.65:47316.service - OpenSSH per-connection server daemon (139.178.89.65:47316). Nov 23 22:58:40.367209 sshd[5223]: Accepted publickey for core from 139.178.89.65 port 47316 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:40.369584 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:40.375833 systemd-logind[1501]: New session 14 of user core. Nov 23 22:58:40.384418 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 22:58:41.260252 sshd[5226]: Connection closed by 139.178.89.65 port 47316 Nov 23 22:58:41.261430 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:41.267992 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 22:58:41.268318 systemd-logind[1501]: Session 14 logged out. Waiting for processes to exit. Nov 23 22:58:41.269601 systemd[1]: sshd@14-91.98.91.202:22-139.178.89.65:47316.service: Deactivated successfully. Nov 23 22:58:41.279090 systemd-logind[1501]: Removed session 14. Nov 23 22:58:41.430317 systemd[1]: Started sshd@15-91.98.91.202:22-139.178.89.65:36184.service - OpenSSH per-connection server daemon (139.178.89.65:36184). Nov 23 22:58:42.420248 sshd[5236]: Accepted publickey for core from 139.178.89.65 port 36184 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:42.422137 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:42.429752 systemd-logind[1501]: New session 15 of user core. Nov 23 22:58:42.433817 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 23 22:58:43.325411 kubelet[2757]: E1123 22:58:43.325032 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:58:43.983352 sshd[5239]: Connection closed by 139.178.89.65 port 36184 Nov 23 22:58:43.984927 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:43.991382 systemd[1]: sshd@15-91.98.91.202:22-139.178.89.65:36184.service: Deactivated successfully. Nov 23 22:58:43.994494 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 22:58:43.996976 systemd-logind[1501]: Session 15 logged out. Waiting for processes to exit. Nov 23 22:58:43.999272 systemd-logind[1501]: Removed session 15. 
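The containerd entries above report HTTP 404 Not Found from ghcr.io for every ghcr.io/flatcar/calico/*:v3.30.4 reference, so each failure is a missing tag in the registry rather than an authentication or network problem. Below is a minimal Python sketch of the same check, assuming the standard OCI distribution flow that GHCR exposes for public repositories (an anonymous bearer token from https://ghcr.io/token, then a HEAD request against /v2/<name>/manifests/<tag>); the endpoints, media types, and image list here are assumptions for illustration and are not taken from the log itself.

# Sketch: probe GHCR for the tags the log shows failing to pull.
# Assumption (not from the log): anonymous pull tokens are issued by
# https://ghcr.io/token for public repos, and a missing tag returns 404
# on the manifest endpoint, matching the containerd errors above.
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    token_url = ("https://ghcr.io/token?service=ghcr.io"
                 f"&scope=repository:{repo}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    for image in ("whisker", "whisker-backend", "goldmane", "apiserver",
                  "csi", "node-driver-registrar", "kube-controllers"):
        repo = f"flatcar/calico/{image}"
        state = "found" if tag_exists(repo, "v3.30.4") else "not found"
        print(f"{repo}:v3.30.4 -> {state}")

A 404 from the manifest endpoint corresponds to the "not found" errors containerd logs above; a 200 would indicate the tag exists and the failure lies elsewhere (for example credentials or proxying on the node).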
Nov 23 22:58:44.149919 systemd[1]: Started sshd@16-91.98.91.202:22-139.178.89.65:36188.service - OpenSSH per-connection server daemon (139.178.89.65:36188). Nov 23 22:58:44.324392 kubelet[2757]: E1123 22:58:44.323976 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:58:45.120203 sshd[5265]: Accepted publickey for core from 139.178.89.65 port 36188 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:45.122858 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:45.132159 systemd-logind[1501]: New session 16 of user core. Nov 23 22:58:45.135992 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 22:58:46.021440 sshd[5268]: Connection closed by 139.178.89.65 port 36188 Nov 23 22:58:46.022440 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:46.028799 systemd[1]: sshd@16-91.98.91.202:22-139.178.89.65:36188.service: Deactivated successfully. Nov 23 22:58:46.031559 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 22:58:46.032977 systemd-logind[1501]: Session 16 logged out. Waiting for processes to exit. Nov 23 22:58:46.036178 systemd-logind[1501]: Removed session 16. Nov 23 22:58:46.195446 systemd[1]: Started sshd@17-91.98.91.202:22-139.178.89.65:36202.service - OpenSSH per-connection server daemon (139.178.89.65:36202). Nov 23 22:58:46.326377 kubelet[2757]: E1123 22:58:46.325404 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:58:47.198033 sshd[5279]: Accepted publickey for core from 139.178.89.65 port 36202 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:47.199566 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:47.207046 systemd-logind[1501]: New session 17 of user core. Nov 23 22:58:47.211005 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 23 22:58:47.326304 kubelet[2757]: E1123 22:58:47.326256 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8" Nov 23 22:58:47.985365 sshd[5282]: Connection closed by 139.178.89.65 port 36202 Nov 23 22:58:47.985823 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:47.994410 systemd-logind[1501]: Session 17 logged out. Waiting for processes to exit. Nov 23 22:58:47.996376 systemd[1]: sshd@17-91.98.91.202:22-139.178.89.65:36202.service: Deactivated successfully. Nov 23 22:58:48.004058 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 22:58:48.010384 systemd-logind[1501]: Removed session 17. Nov 23 22:58:48.325339 kubelet[2757]: E1123 22:58:48.325207 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a" Nov 23 22:58:50.326185 kubelet[2757]: E1123 22:58:50.325794 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0" Nov 23 22:58:53.163925 systemd[1]: Started sshd@18-91.98.91.202:22-139.178.89.65:52204.service - OpenSSH per-connection server daemon (139.178.89.65:52204). 
Nov 23 22:58:54.148286 sshd[5321]: Accepted publickey for core from 139.178.89.65 port 52204 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:54.151439 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:54.162226 systemd-logind[1501]: New session 18 of user core. Nov 23 22:58:54.167861 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 23 22:58:54.324773 kubelet[2757]: E1123 22:58:54.324665 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f" Nov 23 22:58:54.935019 sshd[5324]: Connection closed by 139.178.89.65 port 52204 Nov 23 22:58:54.936732 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:54.942743 systemd[1]: sshd@18-91.98.91.202:22-139.178.89.65:52204.service: Deactivated successfully. Nov 23 22:58:54.945893 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 22:58:54.948803 systemd-logind[1501]: Session 18 logged out. Waiting for processes to exit. Nov 23 22:58:54.951051 systemd-logind[1501]: Removed session 18. Nov 23 22:58:58.324105 kubelet[2757]: E1123 22:58:58.324051 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee" Nov 23 22:58:58.326005 kubelet[2757]: E1123 22:58:58.325899 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c" Nov 23 22:59:00.134991 systemd[1]: Started sshd@19-91.98.91.202:22-139.178.89.65:53666.service - OpenSSH per-connection server daemon (139.178.89.65:53666). 
Nov 23 22:59:00.324599 kubelet[2757]: E1123 22:59:00.324236 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a"
Nov 23 22:59:01.211493 sshd[5336]: Accepted publickey for core from 139.178.89.65 port 53666 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8
Nov 23 22:59:01.213510 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 23 22:59:01.221260 systemd-logind[1501]: New session 19 of user core.
Nov 23 22:59:01.226842 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 23 22:59:01.326228 kubelet[2757]: E1123 22:59:01.326048 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8"
Nov 23 22:59:02.041136 sshd[5339]: Connection closed by 139.178.89.65 port 53666
Nov 23 22:59:02.041732 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Nov 23 22:59:02.047988 systemd-logind[1501]: Session 19 logged out. Waiting for processes to exit.
Nov 23 22:59:02.048326 systemd[1]: sshd@19-91.98.91.202:22-139.178.89.65:53666.service: Deactivated successfully.
Nov 23 22:59:02.052290 systemd[1]: session-19.scope: Deactivated successfully.
Nov 23 22:59:02.058453 systemd-logind[1501]: Removed session 19.
Nov 23 22:59:04.327120 kubelet[2757]: E1123 22:59:04.327062 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0"
Nov 23 22:59:09.324956 kubelet[2757]: E1123 22:59:09.324847 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-g8k2q" podUID="f2f2beaa-e94b-428d-976f-479df6d0fa8f"
Nov 23 22:59:09.325904 kubelet[2757]: E1123 22:59:09.325832 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-k4l4d" podUID="ab494c3a-4812-4ee2-ad6b-4c8c2c77a5ee"
Nov 23 22:59:12.328824 containerd[1533]: time="2025-11-23T22:59:12.328759020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 23 22:59:12.641479 containerd[1533]: time="2025-11-23T22:59:12.641321080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:59:12.644013 containerd[1533]: time="2025-11-23T22:59:12.643111665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 23 22:59:12.644013 containerd[1533]: time="2025-11-23T22:59:12.643258987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 23 22:59:12.644263 kubelet[2757]: E1123 22:59:12.643437 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 22:59:12.644263 kubelet[2757]: E1123 22:59:12.643498 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 23 22:59:12.647297 kubelet[2757]: E1123 22:59:12.647191 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tql6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9564bdf65-pdtkd_calico-apiserver(ae58e09c-3642-4da5-a2ea-675ec846270c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:59:12.648722 kubelet[2757]: E1123 22:59:12.648668 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9564bdf65-pdtkd" podUID="ae58e09c-3642-4da5-a2ea-675ec846270c"
Nov 23 22:59:13.325865 containerd[1533]: time="2025-11-23T22:59:13.325717298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 23 22:59:13.670647 containerd[1533]: time="2025-11-23T22:59:13.670525676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:59:13.673106 containerd[1533]: time="2025-11-23T22:59:13.672941190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 23 22:59:13.673106 containerd[1533]: time="2025-11-23T22:59:13.672950990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 23 22:59:13.673295 kubelet[2757]: E1123 22:59:13.673224 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 23 22:59:13.673295 kubelet[2757]: E1123 22:59:13.673277 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 23 22:59:13.673729 kubelet[2757]: E1123 22:59:13.673420 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6b66b90b128846799f19a7f06b34548e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:59:13.675786 containerd[1533]: time="2025-11-23T22:59:13.675728989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 23 22:59:14.005434 containerd[1533]: time="2025-11-23T22:59:14.004667265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:59:14.007332 containerd[1533]: time="2025-11-23T22:59:14.007241821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 23 22:59:14.007792 containerd[1533]: time="2025-11-23T22:59:14.007326622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 23 22:59:14.008099 kubelet[2757]: E1123 22:59:14.008035 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 23 22:59:14.008153 kubelet[2757]: E1123 22:59:14.008103 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 23 22:59:14.008403 kubelet[2757]: E1123 22:59:14.008314 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64cc7645c9-8ptpv_calico-system(14be2267-c3d8-4884-b5c4-de72ade3d8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:59:14.009816 kubelet[2757]: E1123 22:59:14.009739 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64cc7645c9-8ptpv" podUID="14be2267-c3d8-4884-b5c4-de72ade3d8e8"
Nov 23 22:59:14.325166 kubelet[2757]: E1123 22:59:14.324978 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-755bdf67f9-xqvvt" podUID="22998d4f-4bc5-4628-a75e-c9b585fec59a"
Nov 23 22:59:16.324864 containerd[1533]: time="2025-11-23T22:59:16.324721812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 23 22:59:16.657204 containerd[1533]: time="2025-11-23T22:59:16.657141179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:59:16.659003 containerd[1533]: time="2025-11-23T22:59:16.658940684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 23 22:59:16.659267 containerd[1533]: time="2025-11-23T22:59:16.659072926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 23 22:59:16.659582 kubelet[2757]: E1123 22:59:16.659447 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 22:59:16.659582 kubelet[2757]: E1123 22:59:16.659547 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 23 22:59:16.660479 kubelet[2757]: E1123 22:59:16.660389 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:59:16.662766 containerd[1533]: time="2025-11-23T22:59:16.662731017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 23 22:59:16.665954 kubelet[2757]: E1123 22:59:16.665809 2757 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55720->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-9564bdf65-g8k2q.187ac4cb8b93dcb6 calico-apiserver 1736 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-9564bdf65-g8k2q,UID:f2f2beaa-e94b-428d-976f-479df6d0fa8f,APIVersion:v1,ResourceVersion:806,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-1-2-3-c3120372ad,},FirstTimestamp:2025-11-23 22:56:22 +0000 UTC,LastTimestamp:2025-11-23 22:59:09.324762719 +0000 UTC m=+214.153717187,Count:12,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-2-3-c3120372ad,}"
Nov 23 22:59:16.819494 systemd[1]: cri-containerd-d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2.scope: Deactivated successfully.
Nov 23 22:59:16.820226 systemd[1]: cri-containerd-d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2.scope: Consumed 40.311s CPU time, 102.6M memory peak.
Nov 23 22:59:16.824605 containerd[1533]: time="2025-11-23T22:59:16.824451138Z" level=info msg="received container exit event container_id:\"d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2\" id:\"d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2\" pid:3083 exit_status:1 exited_at:{seconds:1763938756 nanos:823281682}"
Nov 23 22:59:16.856349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2-rootfs.mount: Deactivated successfully.
Nov 23 22:59:16.997261 containerd[1533]: time="2025-11-23T22:59:16.996807807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 23 22:59:16.998580 containerd[1533]: time="2025-11-23T22:59:16.998501910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 23 22:59:16.999186 kubelet[2757]: E1123 22:59:16.998886 2757 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 22:59:16.999186 kubelet[2757]: E1123 22:59:16.998974 2757 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 23 22:59:16.999186 kubelet[2757]: E1123 22:59:16.999118 2757 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2pl8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-h75fx_calico-system(21480ae4-8b64-4bd3-93f8-a08b2cf68bf0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 23 22:59:16.999460 containerd[1533]: time="2025-11-23T22:59:16.998917116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 23 22:59:17.000478 kubelet[2757]: E1123 22:59:17.000389 2757 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-h75fx" podUID="21480ae4-8b64-4bd3-93f8-a08b2cf68bf0"
Nov 23 22:59:17.155220 kubelet[2757]: I1123 22:59:17.155151 2757 scope.go:117] "RemoveContainer" containerID="d4174aae0cf1cff50aa4e113866133c069032c758c298d4ace282e6773dd80c2"
Nov 23 22:59:17.165560 containerd[1533]: time="2025-11-23T22:59:17.165437378Z" level=info msg="CreateContainer within sandbox \"8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 23 22:59:17.178165 containerd[1533]: time="2025-11-23T22:59:17.178089233Z" level=info msg="Container a03630c52e545fd8675118e7ba2be548b5ac11f9dbbcfe1e3dd6ebdd84b701c2: CDI devices from CRI Config.CDIDevices: []"
Nov 23 22:59:17.189694 containerd[1533]: time="2025-11-23T22:59:17.189612152Z" level=info msg="CreateContainer within sandbox \"8142498f09264843ff2716fab9310cc0bb0393e40052168f13a997f1888ec510\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a03630c52e545fd8675118e7ba2be548b5ac11f9dbbcfe1e3dd6ebdd84b701c2\""
Nov 23 22:59:17.190566 containerd[1533]: time="2025-11-23T22:59:17.190537205Z" level=info msg="StartContainer for \"a03630c52e545fd8675118e7ba2be548b5ac11f9dbbcfe1e3dd6ebdd84b701c2\""
Nov 23 22:59:17.191628 containerd[1533]: time="2025-11-23T22:59:17.191579819Z" level=info msg="connecting to shim a03630c52e545fd8675118e7ba2be548b5ac11f9dbbcfe1e3dd6ebdd84b701c2" address="unix:///run/containerd/s/529c1253dcbe09ace2825b04f3305e78f0a69a6632ca02cb0521e112b584c75e" protocol=ttrpc version=3
Nov 23 22:59:17.213138 systemd[1]: cri-containerd-1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba.scope: Deactivated successfully.
Nov 23 22:59:17.213451 systemd[1]: cri-containerd-1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba.scope: Consumed 5.394s CPU time, 68.3M memory peak, 3.4M read from disk.
Nov 23 22:59:17.221369 containerd[1533]: time="2025-11-23T22:59:17.221321750Z" level=info msg="received container exit event container_id:\"1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba\" id:\"1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba\" pid:2594 exit_status:1 exited_at:{seconds:1763938757 nanos:219976172}"
Nov 23 22:59:17.230127 systemd[1]: Started cri-containerd-a03630c52e545fd8675118e7ba2be548b5ac11f9dbbcfe1e3dd6ebdd84b701c2.scope - libcontainer container a03630c52e545fd8675118e7ba2be548b5ac11f9dbbcfe1e3dd6ebdd84b701c2.
Nov 23 22:59:17.245742 kubelet[2757]: E1123 22:59:17.245681 2757 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:55912->10.0.0.2:2379: read: connection timed out"
Nov 23 22:59:17.252864 systemd[1]: cri-containerd-5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd.scope: Deactivated successfully.
Nov 23 22:59:17.253211 systemd[1]: cri-containerd-5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd.scope: Consumed 5.437s CPU time, 26.8M memory peak, 3.6M read from disk.
Nov 23 22:59:17.262746 containerd[1533]: time="2025-11-23T22:59:17.262684122Z" level=info msg="received container exit event container_id:\"5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd\" id:\"5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd\" pid:2617 exit_status:1 exited_at:{seconds:1763938757 nanos:260312289}"
Nov 23 22:59:17.281138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba-rootfs.mount: Deactivated successfully.
Nov 23 22:59:17.346046 containerd[1533]: time="2025-11-23T22:59:17.345971113Z" level=info msg="StartContainer for \"a03630c52e545fd8675118e7ba2be548b5ac11f9dbbcfe1e3dd6ebdd84b701c2\" returns successfully"
Nov 23 22:59:17.858081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd-rootfs.mount: Deactivated successfully.
Nov 23 22:59:18.150604 kubelet[2757]: I1123 22:59:18.150340 2757 scope.go:117] "RemoveContainer" containerID="1c2c950b19912331256ab8b38b7c33daa0d359328739e5d46d9fee3daf1a3cba"
Nov 23 22:59:18.153858 containerd[1533]: time="2025-11-23T22:59:18.153805833Z" level=info msg="CreateContainer within sandbox \"789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 23 22:59:18.158485 kubelet[2757]: I1123 22:59:18.158454 2757 scope.go:117] "RemoveContainer" containerID="5766cfbb0aabfb9ad5070d63cc10943687b6710de4ee659d21ed7869d71757fd"
Nov 23 22:59:18.161214 containerd[1533]: time="2025-11-23T22:59:18.160993453Z" level=info msg="CreateContainer within sandbox \"f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 23 22:59:18.172457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4017139283.mount: Deactivated successfully.
Nov 23 22:59:18.178133 containerd[1533]: time="2025-11-23T22:59:18.176778550Z" level=info msg="Container 52c7f153fb934430c97407e2d13849128e2d5b94a956d94d45820f011655705f: CDI devices from CRI Config.CDIDevices: []"
Nov 23 22:59:18.181769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount468264607.mount: Deactivated successfully.
Nov 23 22:59:18.187211 containerd[1533]: time="2025-11-23T22:59:18.186217640Z" level=info msg="Container f966ee5cbc4ea0403cb7f56ec69e44bb8e8e665a8a9494ac494a135f60d1b566: CDI devices from CRI Config.CDIDevices: []"
Nov 23 22:59:18.200058 containerd[1533]: time="2025-11-23T22:59:18.199990510Z" level=info msg="CreateContainer within sandbox \"789b2f4f730ba3f0179afdafb4c399a19c826c700f2b3f14a3da8df7599f66ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"52c7f153fb934430c97407e2d13849128e2d5b94a956d94d45820f011655705f\""
Nov 23 22:59:18.200633 containerd[1533]: time="2025-11-23T22:59:18.200588878Z" level=info msg="StartContainer for \"52c7f153fb934430c97407e2d13849128e2d5b94a956d94d45820f011655705f\""
Nov 23 22:59:18.202738 containerd[1533]: time="2025-11-23T22:59:18.202688187Z" level=info msg="connecting to shim 52c7f153fb934430c97407e2d13849128e2d5b94a956d94d45820f011655705f" address="unix:///run/containerd/s/a1889ba13f566dd61208b1c04a1e06f23db967b92de70d7df8215e3fce00c996" protocol=ttrpc version=3
Nov 23 22:59:18.207195 containerd[1533]: time="2025-11-23T22:59:18.206889445Z" level=info msg="CreateContainer within sandbox \"f5b3812dbe1e006caf90689a1e19e821d2e2d5deec37401b70ce883cc55211f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f966ee5cbc4ea0403cb7f56ec69e44bb8e8e665a8a9494ac494a135f60d1b566\""
Nov 23 22:59:18.208798 containerd[1533]: time="2025-11-23T22:59:18.208754191Z" level=info msg="StartContainer for \"f966ee5cbc4ea0403cb7f56ec69e44bb8e8e665a8a9494ac494a135f60d1b566\""
Nov 23 22:59:18.210468 containerd[1533]: time="2025-11-23T22:59:18.210421494Z" level=info msg="connecting to shim f966ee5cbc4ea0403cb7f56ec69e44bb8e8e665a8a9494ac494a135f60d1b566" address="unix:///run/containerd/s/5f8041db99278a701057ceca395d8cfd11b6ae4e28635a54e42c9291d9262e8e" protocol=ttrpc version=3
Nov 23 22:59:18.234938 systemd[1]: Started cri-containerd-52c7f153fb934430c97407e2d13849128e2d5b94a956d94d45820f011655705f.scope - libcontainer container 52c7f153fb934430c97407e2d13849128e2d5b94a956d94d45820f011655705f.
Nov 23 22:59:18.242983 systemd[1]: Started cri-containerd-f966ee5cbc4ea0403cb7f56ec69e44bb8e8e665a8a9494ac494a135f60d1b566.scope - libcontainer container f966ee5cbc4ea0403cb7f56ec69e44bb8e8e665a8a9494ac494a135f60d1b566.
Nov 23 22:59:18.302069 containerd[1533]: time="2025-11-23T22:59:18.302015397Z" level=info msg="StartContainer for \"52c7f153fb934430c97407e2d13849128e2d5b94a956d94d45820f011655705f\" returns successfully"
Nov 23 22:59:18.312736 containerd[1533]: time="2025-11-23T22:59:18.312689224Z" level=info msg="StartContainer for \"f966ee5cbc4ea0403cb7f56ec69e44bb8e8e665a8a9494ac494a135f60d1b566\" returns successfully"