Nov 23 22:54:17.808401 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 23 22:54:17.808431 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Sun Nov 23 20:49:09 -00 2025 Nov 23 22:54:17.808442 kernel: KASLR enabled Nov 23 22:54:17.808447 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Nov 23 22:54:17.808453 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Nov 23 22:54:17.808458 kernel: random: crng init done Nov 23 22:54:17.808465 kernel: secureboot: Secure boot disabled Nov 23 22:54:17.808471 kernel: ACPI: Early table checksum verification disabled Nov 23 22:54:17.808476 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Nov 23 22:54:17.808482 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Nov 23 22:54:17.808490 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808496 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808502 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808508 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808515 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808522 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808528 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808534 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808541 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 23 22:54:17.808547 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Nov 23 22:54:17.808553 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Nov 23 22:54:17.808559 kernel: ACPI: Use ACPI SPCR as default console: No Nov 23 22:54:17.808565 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Nov 23 22:54:17.808571 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Nov 23 22:54:17.808577 kernel: Zone ranges: Nov 23 22:54:17.808583 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 23 22:54:17.808590 kernel: DMA32 empty Nov 23 22:54:17.808596 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Nov 23 22:54:17.808602 kernel: Device empty Nov 23 22:54:17.808608 kernel: Movable zone start for each node Nov 23 22:54:17.808614 kernel: Early memory node ranges Nov 23 22:54:17.808620 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Nov 23 22:54:17.808626 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Nov 23 22:54:17.808632 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Nov 23 22:54:17.808640 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Nov 23 22:54:17.808647 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Nov 23 22:54:17.808653 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Nov 23 22:54:17.808659 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Nov 23 22:54:17.808666 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Nov 23 22:54:17.808672 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff] Nov 23 
22:54:17.808681 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Nov 23 22:54:17.808688 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Nov 23 22:54:17.808694 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Nov 23 22:54:17.808702 kernel: psci: probing for conduit method from ACPI. Nov 23 22:54:17.808708 kernel: psci: PSCIv1.1 detected in firmware. Nov 23 22:54:17.808715 kernel: psci: Using standard PSCI v0.2 function IDs Nov 23 22:54:17.810867 kernel: psci: Trusted OS migration not required Nov 23 22:54:17.810897 kernel: psci: SMC Calling Convention v1.1 Nov 23 22:54:17.810905 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 23 22:54:17.810912 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 23 22:54:17.810919 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 23 22:54:17.810926 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 23 22:54:17.810934 kernel: Detected PIPT I-cache on CPU0 Nov 23 22:54:17.810940 kernel: CPU features: detected: GIC system register CPU interface Nov 23 22:54:17.810957 kernel: CPU features: detected: Spectre-v4 Nov 23 22:54:17.810964 kernel: CPU features: detected: Spectre-BHB Nov 23 22:54:17.810972 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 23 22:54:17.810979 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 23 22:54:17.810986 kernel: CPU features: detected: ARM erratum 1418040 Nov 23 22:54:17.810993 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 23 22:54:17.811000 kernel: alternatives: applying boot alternatives Nov 23 22:54:17.811009 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:54:17.811018 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 23 22:54:17.811025 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 23 22:54:17.811032 kernel: Fallback order for Node 0: 0 Nov 23 22:54:17.811040 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Nov 23 22:54:17.811047 kernel: Policy zone: Normal Nov 23 22:54:17.811055 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 23 22:54:17.811063 kernel: software IO TLB: area num 2. Nov 23 22:54:17.811071 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Nov 23 22:54:17.811079 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 23 22:54:17.811086 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 23 22:54:17.811096 kernel: rcu: RCU event tracing is enabled. Nov 23 22:54:17.811104 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 23 22:54:17.811111 kernel: Trampoline variant of Tasks RCU enabled. Nov 23 22:54:17.811119 kernel: Tracing variant of Tasks RCU enabled. Nov 23 22:54:17.811127 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 23 22:54:17.811137 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 23 22:54:17.811144 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 23 22:54:17.811150 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 23 22:54:17.811157 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 23 22:54:17.811164 kernel: GICv3: 256 SPIs implemented Nov 23 22:54:17.811183 kernel: GICv3: 0 Extended SPIs implemented Nov 23 22:54:17.811189 kernel: Root IRQ handler: gic_handle_irq Nov 23 22:54:17.811196 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 23 22:54:17.811202 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 23 22:54:17.811209 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 23 22:54:17.811216 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 23 22:54:17.811224 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Nov 23 22:54:17.811231 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Nov 23 22:54:17.811238 kernel: GICv3: using LPI property table @0x0000000100120000 Nov 23 22:54:17.811245 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Nov 23 22:54:17.811251 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 23 22:54:17.811257 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 22:54:17.811264 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 23 22:54:17.811271 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 23 22:54:17.811277 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 23 22:54:17.811284 kernel: Console: colour dummy device 80x25 Nov 23 22:54:17.811291 kernel: ACPI: Core revision 20240827 Nov 23 22:54:17.811299 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 23 22:54:17.811306 kernel: pid_max: default: 32768 minimum: 301 Nov 23 22:54:17.811312 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 23 22:54:17.811319 kernel: landlock: Up and running. Nov 23 22:54:17.811326 kernel: SELinux: Initializing. Nov 23 22:54:17.811333 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:54:17.811339 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 23 22:54:17.811346 kernel: rcu: Hierarchical SRCU implementation. Nov 23 22:54:17.811353 kernel: rcu: Max phase no-delay instances is 400. Nov 23 22:54:17.811361 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 23 22:54:17.811368 kernel: Remapping and enabling EFI services. Nov 23 22:54:17.811375 kernel: smp: Bringing up secondary CPUs ... Nov 23 22:54:17.811382 kernel: Detected PIPT I-cache on CPU1 Nov 23 22:54:17.811389 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 23 22:54:17.811395 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Nov 23 22:54:17.811402 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 23 22:54:17.811408 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 23 22:54:17.811415 kernel: smp: Brought up 1 node, 2 CPUs Nov 23 22:54:17.811423 kernel: SMP: Total of 2 processors activated. 
Nov 23 22:54:17.811435 kernel: CPU: All CPU(s) started at EL1 Nov 23 22:54:17.811442 kernel: CPU features: detected: 32-bit EL0 Support Nov 23 22:54:17.811450 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 23 22:54:17.811457 kernel: CPU features: detected: Common not Private translations Nov 23 22:54:17.812942 kernel: CPU features: detected: CRC32 instructions Nov 23 22:54:17.812951 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 23 22:54:17.812959 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 23 22:54:17.812972 kernel: CPU features: detected: LSE atomic instructions Nov 23 22:54:17.812979 kernel: CPU features: detected: Privileged Access Never Nov 23 22:54:17.812986 kernel: CPU features: detected: RAS Extension Support Nov 23 22:54:17.812993 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 23 22:54:17.813000 kernel: alternatives: applying system-wide alternatives Nov 23 22:54:17.813007 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Nov 23 22:54:17.813016 kernel: Memory: 3858852K/4096000K available (11200K kernel code, 2456K rwdata, 9084K rodata, 39552K init, 1038K bss, 215668K reserved, 16384K cma-reserved) Nov 23 22:54:17.813023 kernel: devtmpfs: initialized Nov 23 22:54:17.813030 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 23 22:54:17.813039 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 23 22:54:17.813046 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 23 22:54:17.813053 kernel: 0 pages in range for non-PLT usage Nov 23 22:54:17.813060 kernel: 508400 pages in range for PLT usage Nov 23 22:54:17.813067 kernel: pinctrl core: initialized pinctrl subsystem Nov 23 22:54:17.813074 kernel: SMBIOS 3.0.0 present. Nov 23 22:54:17.813081 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Nov 23 22:54:17.813088 kernel: DMI: Memory slots populated: 1/1 Nov 23 22:54:17.813097 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 23 22:54:17.813107 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 23 22:54:17.813114 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 23 22:54:17.813121 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 23 22:54:17.813128 kernel: audit: initializing netlink subsys (disabled) Nov 23 22:54:17.813136 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Nov 23 22:54:17.813143 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 23 22:54:17.813150 kernel: cpuidle: using governor menu Nov 23 22:54:17.813156 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 23 22:54:17.813164 kernel: ASID allocator initialised with 32768 entries Nov 23 22:54:17.813209 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 23 22:54:17.813217 kernel: Serial: AMBA PL011 UART driver Nov 23 22:54:17.813225 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 23 22:54:17.813233 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 23 22:54:17.813240 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 23 22:54:17.813247 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 23 22:54:17.813255 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 23 22:54:17.813262 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 23 22:54:17.813269 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 23 22:54:17.813278 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 23 22:54:17.813285 kernel: ACPI: Added _OSI(Module Device) Nov 23 22:54:17.813292 kernel: ACPI: Added _OSI(Processor Device) Nov 23 22:54:17.813299 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 23 22:54:17.813306 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 23 22:54:17.813313 kernel: ACPI: Interpreter enabled Nov 23 22:54:17.813320 kernel: ACPI: Using GIC for interrupt routing Nov 23 22:54:17.813327 kernel: ACPI: MCFG table detected, 1 entries Nov 23 22:54:17.813334 kernel: ACPI: CPU0 has been hot-added Nov 23 22:54:17.813343 kernel: ACPI: CPU1 has been hot-added Nov 23 22:54:17.813350 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 23 22:54:17.813357 kernel: printk: legacy console [ttyAMA0] enabled Nov 23 22:54:17.813364 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 23 22:54:17.813529 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 23 22:54:17.813594 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 23 22:54:17.813655 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 23 22:54:17.813715 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 23 22:54:17.816903 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 23 22:54:17.816922 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 23 22:54:17.816929 kernel: PCI host bridge to bus 0000:00 Nov 23 22:54:17.817000 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 23 22:54:17.817061 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 23 22:54:17.817133 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 23 22:54:17.817206 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 23 22:54:17.817310 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 23 22:54:17.817386 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Nov 23 22:54:17.817448 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Nov 23 22:54:17.817507 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Nov 23 22:54:17.817577 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.817636 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Nov 23 22:54:17.817698 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 23 
22:54:17.817828 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Nov 23 22:54:17.817907 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Nov 23 22:54:17.817980 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.818041 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Nov 23 22:54:17.818100 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 23 22:54:17.818159 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Nov 23 22:54:17.818278 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.818342 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Nov 23 22:54:17.818402 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 23 22:54:17.818460 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Nov 23 22:54:17.818522 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Nov 23 22:54:17.818596 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.818657 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Nov 23 22:54:17.818720 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 23 22:54:17.821316 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Nov 23 22:54:17.821384 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Nov 23 22:54:17.821454 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.821514 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Nov 23 22:54:17.821571 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 23 22:54:17.821629 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Nov 23 22:54:17.821694 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Nov 23 22:54:17.821783 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.821845 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Nov 23 22:54:17.821903 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 23 22:54:17.821961 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Nov 23 22:54:17.822017 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Nov 23 22:54:17.822082 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.822144 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Nov 23 22:54:17.822218 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 23 22:54:17.822277 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Nov 23 22:54:17.822335 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Nov 23 22:54:17.822410 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.822470 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Nov 23 22:54:17.822532 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 23 22:54:17.822592 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Nov 23 22:54:17.822665 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 23 22:54:17.823094 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Nov 23 22:54:17.823225 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 23 22:54:17.823291 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Nov 23 22:54:17.823361 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Nov 23 22:54:17.823426 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Nov 23 22:54:17.823496 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 23 22:54:17.823557 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Nov 23 22:54:17.823618 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 23 22:54:17.823676 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Nov 23 22:54:17.824543 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Nov 23 22:54:17.824618 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Nov 23 22:54:17.824701 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Nov 23 22:54:17.824797 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Nov 23 22:54:17.824860 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Nov 23 22:54:17.824929 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Nov 23 22:54:17.824989 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Nov 23 22:54:17.825059 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Nov 23 22:54:17.825125 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Nov 23 22:54:17.825205 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Nov 23 22:54:17.825275 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Nov 23 22:54:17.825336 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Nov 23 22:54:17.825396 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Nov 23 22:54:17.825465 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 23 22:54:17.825525 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Nov 23 22:54:17.825587 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Nov 23 22:54:17.825647 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Nov 23 22:54:17.825711 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Nov 23 22:54:17.826443 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Nov 23 22:54:17.826510 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Nov 23 22:54:17.826573 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Nov 23 22:54:17.826632 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Nov 23 22:54:17.826695 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Nov 23 22:54:17.826892 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 23 22:54:17.826959 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Nov 23 22:54:17.827017 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Nov 23 22:54:17.827079 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 23 22:54:17.827139 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Nov 23 22:54:17.827251 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Nov 23 22:54:17.827318 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 23 22:54:17.827377 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Nov 23 22:54:17.827434 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Nov 23 22:54:17.827495 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 23 22:54:17.827554 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Nov 23 22:54:17.827610 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Nov 23 22:54:17.827675 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 23 22:54:17.827746 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Nov 23 22:54:17.827806 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Nov 23 22:54:17.827870 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 23 22:54:17.827927 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Nov 23 22:54:17.827984 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Nov 23 22:54:17.828045 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 23 22:54:17.828105 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Nov 23 22:54:17.828175 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Nov 23 22:54:17.828250 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Nov 23 22:54:17.828309 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Nov 23 22:54:17.828370 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Nov 23 22:54:17.828428 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Nov 23 22:54:17.828487 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Nov 23 22:54:17.828546 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Nov 23 22:54:17.828606 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Nov 23 22:54:17.828663 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Nov 23 22:54:17.829010 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Nov 23 22:54:17.829112 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Nov 23 22:54:17.829222 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Nov 23 22:54:17.829294 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Nov 23 22:54:17.829357 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Nov 23 22:54:17.829423 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Nov 23 
22:54:17.829486 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Nov 23 22:54:17.829546 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Nov 23 22:54:17.829605 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Nov 23 22:54:17.829662 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Nov 23 22:54:17.829778 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Nov 23 22:54:17.831802 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Nov 23 22:54:17.831880 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Nov 23 22:54:17.831946 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Nov 23 22:54:17.832012 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Nov 23 22:54:17.832072 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Nov 23 22:54:17.832135 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Nov 23 22:54:17.832215 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Nov 23 22:54:17.832282 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Nov 23 22:54:17.832340 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Nov 23 22:54:17.832400 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Nov 23 22:54:17.832461 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Nov 23 22:54:17.832522 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Nov 23 22:54:17.832581 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Nov 23 22:54:17.832643 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Nov 23 22:54:17.832703 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Nov 23 22:54:17.832894 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Nov 23 22:54:17.832960 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Nov 23 22:54:17.833021 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Nov 23 22:54:17.833078 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Nov 23 22:54:17.833143 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Nov 23 22:54:17.833265 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Nov 23 22:54:17.833335 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 23 22:54:17.833401 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Nov 23 22:54:17.833469 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 23 22:54:17.833536 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 23 22:54:17.833600 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Nov 23 22:54:17.833665 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Nov 23 22:54:17.833755 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Nov 23 22:54:17.833825 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 23 22:54:17.835627 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 23 22:54:17.835761 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Nov 23 22:54:17.835831 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Nov 23 22:54:17.835899 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Nov 23 22:54:17.835960 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Nov 23 22:54:17.836020 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 23 22:54:17.836081 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 23 22:54:17.836148 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Nov 23 22:54:17.836259 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Nov 23 22:54:17.836333 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Nov 23 22:54:17.836397 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 23 22:54:17.836457 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 23 22:54:17.836517 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Nov 23 22:54:17.836577 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Nov 23 22:54:17.836658 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Nov 23 22:54:17.836720 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Nov 23 22:54:17.836802 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 23 22:54:17.836863 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 23 22:54:17.836923 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Nov 23 22:54:17.836981 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Nov 23 22:54:17.837047 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Nov 23 22:54:17.837112 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Nov 23 22:54:17.837189 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 23 22:54:17.837269 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 23 22:54:17.837328 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Nov 23 22:54:17.837408 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Nov 23 22:54:17.837476 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Nov 23 22:54:17.837537 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Nov 23 22:54:17.837602 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Nov 23 22:54:17.837666 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 23 22:54:17.837747 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 23 22:54:17.837811 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Nov 23 22:54:17.837873 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Nov 23 22:54:17.837936 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 23 22:54:17.838000 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 23 22:54:17.838058 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Nov 23 22:54:17.838117 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Nov 23 22:54:17.838193 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 23 22:54:17.838254 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Nov 23 22:54:17.838314 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Nov 23 22:54:17.838373 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 22:54:17.838435 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 23 22:54:17.838488 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Nov 23 22:54:17.838539 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 23 22:54:17.838606 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 23 22:54:17.838661 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Nov 23 22:54:17.838717 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Nov 23 22:54:17.840463 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Nov 23 22:54:17.840524 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Nov 23 22:54:17.840579 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Nov 23 22:54:17.840647 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Nov 23 22:54:17.840701 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Nov 23 22:54:17.840780 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Nov 23 22:54:17.840850 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Nov 23 22:54:17.840904 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Nov 23 22:54:17.840957 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Nov 23 22:54:17.841019 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Nov 23 22:54:17.841072 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Nov 23 22:54:17.841125 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Nov 23 22:54:17.841252 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Nov 23 22:54:17.841317 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Nov 23 22:54:17.841371 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Nov 23 22:54:17.841434 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Nov 23 22:54:17.841489 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Nov 23 22:54:17.841545 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Nov 23 22:54:17.841608 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Nov 23 22:54:17.841665 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Nov 23 22:54:17.841718 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Nov 23 22:54:17.841799 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Nov 23 22:54:17.841855 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Nov 23 22:54:17.841909 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Nov 23 22:54:17.841919 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 23 22:54:17.841927 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 23 22:54:17.841937 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 23 22:54:17.841945 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 23 22:54:17.841952 kernel: iommu: Default domain type: Translated Nov 23 22:54:17.841960 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 23 22:54:17.841967 kernel: efivars: Registered efivars operations Nov 23 22:54:17.841975 kernel: vgaarb: loaded Nov 23 22:54:17.841982 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 23 22:54:17.841989 kernel: VFS: Disk quotas dquot_6.6.0 Nov 23 22:54:17.841997 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 23 22:54:17.842006 kernel: pnp: PnP ACPI init Nov 23 22:54:17.842075 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 23 
22:54:17.842086 kernel: pnp: PnP ACPI: found 1 devices Nov 23 22:54:17.842094 kernel: NET: Registered PF_INET protocol family Nov 23 22:54:17.842101 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 23 22:54:17.842109 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 23 22:54:17.842117 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 23 22:54:17.842125 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 23 22:54:17.842135 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 23 22:54:17.842142 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 23 22:54:17.842149 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:54:17.842157 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 23 22:54:17.842173 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 23 22:54:17.842250 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Nov 23 22:54:17.842261 kernel: PCI: CLS 0 bytes, default 64 Nov 23 22:54:17.842268 kernel: kvm [1]: HYP mode not available Nov 23 22:54:17.842276 kernel: Initialise system trusted keyrings Nov 23 22:54:17.842286 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 23 22:54:17.842293 kernel: Key type asymmetric registered Nov 23 22:54:17.842300 kernel: Asymmetric key parser 'x509' registered Nov 23 22:54:17.842308 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 23 22:54:17.842315 kernel: io scheduler mq-deadline registered Nov 23 22:54:17.842323 kernel: io scheduler kyber registered Nov 23 22:54:17.842330 kernel: io scheduler bfq registered Nov 23 22:54:17.842339 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 23 22:54:17.842400 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Nov 23 22:54:17.842462 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Nov 23 22:54:17.842521 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.842582 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Nov 23 22:54:17.842646 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Nov 23 22:54:17.842714 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.843871 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Nov 23 22:54:17.843939 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Nov 23 22:54:17.844000 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.844070 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Nov 23 22:54:17.844138 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Nov 23 22:54:17.844238 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.844308 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Nov 23 22:54:17.844370 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Nov 23 22:54:17.844433 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.844497 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Nov 23 22:54:17.844560 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Nov 23 22:54:17.844629 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.844695 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Nov 23 22:54:17.845845 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Nov 23 22:54:17.845925 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.845990 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Nov 23 22:54:17.846052 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Nov 23 22:54:17.846111 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.846128 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Nov 23 22:54:17.846236 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Nov 23 22:54:17.846303 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Nov 23 22:54:17.846363 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 23 22:54:17.846374 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 23 22:54:17.846381 kernel: ACPI: button: Power Button [PWRB] Nov 23 22:54:17.846389 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 23 22:54:17.846454 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Nov 23 22:54:17.846534 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Nov 23 22:54:17.846549 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 23 22:54:17.846557 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 23 22:54:17.846631 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Nov 23 22:54:17.846644 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Nov 23 22:54:17.846652 kernel: thunder_xcv, ver 1.0 Nov 23 22:54:17.846660 kernel: thunder_bgx, ver 1.0 Nov 23 22:54:17.846668 kernel: nicpf, ver 1.0 Nov 23 22:54:17.846676 kernel: nicvf, ver 1.0 Nov 23 22:54:17.847836 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 23 22:54:17.847915 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-23T22:54:17 UTC (1763938457) Nov 23 22:54:17.847925 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 23 22:54:17.847933 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 23 22:54:17.847940 kernel: watchdog: NMI not fully supported Nov 23 22:54:17.847948 kernel: watchdog: Hard watchdog permanently disabled Nov 23 22:54:17.847955 kernel: NET: Registered PF_INET6 protocol family Nov 23 22:54:17.847963 kernel: Segment Routing with IPv6 Nov 23 22:54:17.847976 kernel: In-situ OAM (IOAM) with IPv6 Nov 23 22:54:17.847984 kernel: NET: Registered PF_PACKET protocol family Nov 23 22:54:17.847992 kernel: Key type dns_resolver registered Nov 23 22:54:17.847999 kernel: registered taskstats version 1 Nov 23 22:54:17.848007 kernel: Loading compiled-in X.509 certificates Nov 23 22:54:17.848014 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 98b0841f2908e51633cd38699ad12796cadb7bd1' Nov 23 22:54:17.848022 kernel: Demotion targets for Node 0: null Nov 23 22:54:17.848029 kernel: Key type .fscrypt 
registered Nov 23 22:54:17.848036 kernel: Key type fscrypt-provisioning registered Nov 23 22:54:17.848045 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 23 22:54:17.848053 kernel: ima: Allocated hash algorithm: sha1 Nov 23 22:54:17.848060 kernel: ima: No architecture policies found Nov 23 22:54:17.848068 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 23 22:54:17.848076 kernel: clk: Disabling unused clocks Nov 23 22:54:17.848083 kernel: PM: genpd: Disabling unused power domains Nov 23 22:54:17.848090 kernel: Warning: unable to open an initial console. Nov 23 22:54:17.848099 kernel: Freeing unused kernel memory: 39552K Nov 23 22:54:17.848106 kernel: Run /init as init process Nov 23 22:54:17.848114 kernel: with arguments: Nov 23 22:54:17.848124 kernel: /init Nov 23 22:54:17.848132 kernel: with environment: Nov 23 22:54:17.848141 kernel: HOME=/ Nov 23 22:54:17.848149 kernel: TERM=linux Nov 23 22:54:17.848157 systemd[1]: Successfully made /usr/ read-only. Nov 23 22:54:17.848181 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:54:17.848190 systemd[1]: Detected virtualization kvm. Nov 23 22:54:17.848200 systemd[1]: Detected architecture arm64. Nov 23 22:54:17.848207 systemd[1]: Running in initrd. Nov 23 22:54:17.848215 systemd[1]: No hostname configured, using default hostname. Nov 23 22:54:17.848223 systemd[1]: Hostname set to . Nov 23 22:54:17.848231 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:54:17.848238 systemd[1]: Queued start job for default target initrd.target. Nov 23 22:54:17.848246 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:54:17.848254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:54:17.848265 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 23 22:54:17.848274 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:54:17.848283 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 23 22:54:17.848305 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 23 22:54:17.848320 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 23 22:54:17.848329 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 23 22:54:17.848337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:54:17.848347 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:54:17.848355 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:54:17.848363 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:54:17.848371 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:54:17.848379 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:54:17.848386 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Nov 23 22:54:17.848394 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:54:17.848402 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 23 22:54:17.848410 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 23 22:54:17.848419 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:54:17.848427 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:54:17.848436 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:54:17.848443 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:54:17.848451 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 23 22:54:17.848459 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:54:17.848467 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 23 22:54:17.848475 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 23 22:54:17.848485 systemd[1]: Starting systemd-fsck-usr.service... Nov 23 22:54:17.848493 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:54:17.848500 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:54:17.848510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:17.848519 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 23 22:54:17.848527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:54:17.848538 systemd[1]: Finished systemd-fsck-usr.service. Nov 23 22:54:17.848546 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 22:54:17.848585 systemd-journald[245]: Collecting audit messages is disabled. Nov 23 22:54:17.848608 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:17.848616 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 23 22:54:17.848625 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 23 22:54:17.848633 kernel: Bridge firewalling registered Nov 23 22:54:17.848641 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:54:17.848649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:54:17.848658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:54:17.848667 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:54:17.848677 systemd-journald[245]: Journal started Nov 23 22:54:17.848696 systemd-journald[245]: Runtime Journal (/run/log/journal/3629654f70154343a4e3531731bbf882) is 8M, max 76.5M, 68.5M free. Nov 23 22:54:17.796038 systemd-modules-load[247]: Inserted module 'overlay' Nov 23 22:54:17.851106 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:54:17.822770 systemd-modules-load[247]: Inserted module 'br_netfilter' Nov 23 22:54:17.855194 systemd[1]: Started systemd-journald.service - Journal Service. 
Nov 23 22:54:17.854994 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:54:17.864528 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 23 22:54:17.867547 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:54:17.880453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:54:17.891240 systemd-tmpfiles[277]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 23 22:54:17.895198 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:54:17.897759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:54:17.904296 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=c01798725f53da1d62d166036caa3c72754cb158fe469d9d9e3df0d6cadc7a34 Nov 23 22:54:17.948526 systemd-resolved[293]: Positive Trust Anchors: Nov 23 22:54:17.948543 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:54:17.948575 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:54:17.960324 systemd-resolved[293]: Defaulting to hostname 'linux'. Nov 23 22:54:17.962210 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:54:17.962870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:54:18.025822 kernel: SCSI subsystem initialized Nov 23 22:54:18.029761 kernel: Loading iSCSI transport class v2.0-870. Nov 23 22:54:18.037777 kernel: iscsi: registered transport (tcp) Nov 23 22:54:18.051799 kernel: iscsi: registered transport (qla4xxx) Nov 23 22:54:18.051949 kernel: QLogic iSCSI HBA Driver Nov 23 22:54:18.075843 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:54:18.100290 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:54:18.103687 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:54:18.171815 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 23 22:54:18.174408 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 23 22:54:18.248783 kernel: raid6: neonx8 gen() 15684 MB/s Nov 23 22:54:18.265941 kernel: raid6: neonx4 gen() 15742 MB/s Nov 23 22:54:18.282782 kernel: raid6: neonx2 gen() 13167 MB/s Nov 23 22:54:18.299783 kernel: raid6: neonx1 gen() 10416 MB/s Nov 23 22:54:18.316796 kernel: raid6: int64x8 gen() 6856 MB/s Nov 23 22:54:18.333787 kernel: raid6: int64x4 gen() 7312 MB/s Nov 23 22:54:18.350818 kernel: raid6: int64x2 gen() 6074 MB/s Nov 23 22:54:18.367797 kernel: raid6: int64x1 gen() 5006 MB/s Nov 23 22:54:18.367893 kernel: raid6: using algorithm neonx4 gen() 15742 MB/s Nov 23 22:54:18.384811 kernel: raid6: .... xor() 12303 MB/s, rmw enabled Nov 23 22:54:18.384894 kernel: raid6: using neon recovery algorithm Nov 23 22:54:18.389924 kernel: xor: measuring software checksum speed Nov 23 22:54:18.390009 kernel: 8regs : 20582 MB/sec Nov 23 22:54:18.390038 kernel: 32regs : 21693 MB/sec Nov 23 22:54:18.390064 kernel: arm64_neon : 28099 MB/sec Nov 23 22:54:18.390783 kernel: xor: using function: arm64_neon (28099 MB/sec) Nov 23 22:54:18.445775 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 23 22:54:18.454016 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:54:18.457269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:54:18.490145 systemd-udevd[494]: Using default interface naming scheme 'v255'. Nov 23 22:54:18.494570 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:54:18.499076 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 23 22:54:18.532961 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation Nov 23 22:54:18.567203 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:54:18.570787 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:54:18.649856 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:54:18.654068 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 23 22:54:18.759962 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Nov 23 22:54:18.766992 kernel: ACPI: bus type USB registered Nov 23 22:54:18.767050 kernel: usbcore: registered new interface driver usbfs Nov 23 22:54:18.767061 kernel: usbcore: registered new interface driver hub Nov 23 22:54:18.767827 kernel: scsi host0: Virtio SCSI HBA Nov 23 22:54:18.772750 kernel: usbcore: registered new device driver usb Nov 23 22:54:18.774902 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 23 22:54:18.774977 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 23 22:54:18.790976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:54:18.791121 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:18.794989 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:18.798639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:18.802138 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Nov 23 22:54:18.814754 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 22:54:18.814959 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 23 22:54:18.815040 kernel: sr 0:0:0:0: Power-on or device reset occurred Nov 23 22:54:18.816251 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Nov 23 22:54:18.816409 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 23 22:54:18.817767 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Nov 23 22:54:18.821549 kernel: sd 0:0:0:1: Power-on or device reset occurred Nov 23 22:54:18.822857 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 23 22:54:18.823025 kernel: sd 0:0:0:1: [sda] Write Protect is off Nov 23 22:54:18.823103 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Nov 23 22:54:18.823207 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 23 22:54:18.826750 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 23 22:54:18.829931 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 23 22:54:18.830174 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 23 22:54:18.830267 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 23 22:54:18.830343 kernel: hub 1-0:1.0: USB hub found Nov 23 22:54:18.830447 kernel: hub 1-0:1.0: 4 ports detected Nov 23 22:54:18.830525 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 23 22:54:18.832764 kernel: hub 2-0:1.0: USB hub found Nov 23 22:54:18.832972 kernel: hub 2-0:1.0: 4 ports detected Nov 23 22:54:18.836750 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 23 22:54:18.836801 kernel: GPT:17805311 != 80003071 Nov 23 22:54:18.836811 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 23 22:54:18.836821 kernel: GPT:17805311 != 80003071 Nov 23 22:54:18.836831 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 23 22:54:18.837745 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 22:54:18.838774 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Nov 23 22:54:18.841492 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:18.919666 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 23 22:54:18.935051 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Nov 23 22:54:18.945072 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 23 22:54:18.946601 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 23 22:54:18.948765 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 23 22:54:18.960383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 23 22:54:18.963701 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:54:18.964414 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:54:18.965981 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:54:18.968888 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 23 22:54:18.971002 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 23 22:54:18.985216 disk-uuid[607]: Primary Header is updated. 
Nov 23 22:54:18.985216 disk-uuid[607]: Secondary Entries is updated. Nov 23 22:54:18.985216 disk-uuid[607]: Secondary Header is updated. Nov 23 22:54:18.994091 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:54:18.997851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 22:54:19.014801 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 22:54:19.069059 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 23 22:54:19.201747 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Nov 23 22:54:19.201805 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 23 22:54:19.202781 kernel: usbcore: registered new interface driver usbhid Nov 23 22:54:19.202818 kernel: usbhid: USB HID core driver Nov 23 22:54:19.306765 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Nov 23 22:54:19.432753 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Nov 23 22:54:19.484745 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Nov 23 22:54:20.026834 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 23 22:54:20.028585 disk-uuid[610]: The operation has completed successfully. Nov 23 22:54:20.084568 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 23 22:54:20.085776 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 23 22:54:20.120854 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 23 22:54:20.155670 sh[631]: Success Nov 23 22:54:20.170981 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 23 22:54:20.171050 kernel: device-mapper: uevent: version 1.0.3 Nov 23 22:54:20.171883 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 23 22:54:20.181773 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 23 22:54:20.235115 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 23 22:54:20.236894 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 23 22:54:20.247925 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 23 22:54:20.259770 kernel: BTRFS: device fsid 9fed50bd-c943-4402-9e9a-f39625143eb9 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (644) Nov 23 22:54:20.261794 kernel: BTRFS info (device dm-0): first mount of filesystem 9fed50bd-c943-4402-9e9a-f39625143eb9 Nov 23 22:54:20.261849 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:20.269105 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 23 22:54:20.269179 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 23 22:54:20.269738 kernel: BTRFS info (device dm-0): enabling free space tree Nov 23 22:54:20.271358 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 23 22:54:20.272597 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:54:20.273964 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Nov 23 22:54:20.274927 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 23 22:54:20.279318 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 23 22:54:20.306791 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (673) Nov 23 22:54:20.309241 kernel: BTRFS info (device sda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:20.309980 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:20.314021 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 22:54:20.314088 kernel: BTRFS info (device sda6): turning on async discard Nov 23 22:54:20.314801 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 22:54:20.320824 kernel: BTRFS info (device sda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:20.322922 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 23 22:54:20.324688 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 23 22:54:20.462401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:54:20.466431 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:54:20.507451 ignition[720]: Ignition 2.22.0 Nov 23 22:54:20.507463 ignition[720]: Stage: fetch-offline Nov 23 22:54:20.507498 ignition[720]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:20.507506 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:20.507599 ignition[720]: parsed url from cmdline: "" Nov 23 22:54:20.507602 ignition[720]: no config URL provided Nov 23 22:54:20.507607 ignition[720]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 22:54:20.507613 ignition[720]: no config at "/usr/lib/ignition/user.ign" Nov 23 22:54:20.507619 ignition[720]: failed to fetch config: resource requires networking Nov 23 22:54:20.510876 ignition[720]: Ignition finished successfully Nov 23 22:54:20.518510 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:54:20.520942 systemd-networkd[821]: lo: Link UP Nov 23 22:54:20.520955 systemd-networkd[821]: lo: Gained carrier Nov 23 22:54:20.522541 systemd-networkd[821]: Enumeration completed Nov 23 22:54:20.523011 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:20.523014 systemd-networkd[821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:54:20.524680 systemd-networkd[821]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:20.524684 systemd-networkd[821]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:54:20.525039 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:54:20.525072 systemd-networkd[821]: eth0: Link UP Nov 23 22:54:20.525904 systemd-networkd[821]: eth1: Link UP Nov 23 22:54:20.526092 systemd-networkd[821]: eth0: Gained carrier Nov 23 22:54:20.526104 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:20.526569 systemd[1]: Reached target network.target - Network. 
Nov 23 22:54:20.530536 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 23 22:54:20.533372 systemd-networkd[821]: eth1: Gained carrier Nov 23 22:54:20.533390 systemd-networkd[821]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:20.559828 systemd-networkd[821]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 23 22:54:20.571795 ignition[826]: Ignition 2.22.0 Nov 23 22:54:20.571806 ignition[826]: Stage: fetch Nov 23 22:54:20.571958 ignition[826]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:20.571967 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:20.572694 ignition[826]: parsed url from cmdline: "" Nov 23 22:54:20.572699 ignition[826]: no config URL provided Nov 23 22:54:20.572706 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Nov 23 22:54:20.572719 ignition[826]: no config at "/usr/lib/ignition/user.ign" Nov 23 22:54:20.572774 ignition[826]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 23 22:54:20.573651 ignition[826]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Nov 23 22:54:20.583863 systemd-networkd[821]: eth0: DHCPv4 address 188.245.196.203/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 23 22:54:20.774657 ignition[826]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Nov 23 22:54:20.780511 ignition[826]: GET result: OK Nov 23 22:54:20.781444 ignition[826]: parsing config with SHA512: db918b812738ad4f6730dd3c2fc90c443f79d0f35a2a7784d629b6a7ff0a8c87acc56830a261ce4d6fa8b543d1954b59bb61a535c3866f6cac69959ac2806a89 Nov 23 22:54:20.791591 unknown[826]: fetched base config from "system" Nov 23 22:54:20.791609 unknown[826]: fetched base config from "system" Nov 23 22:54:20.792073 ignition[826]: fetch: fetch complete Nov 23 22:54:20.791628 unknown[826]: fetched user config from "hetzner" Nov 23 22:54:20.792080 ignition[826]: fetch: fetch passed Nov 23 22:54:20.792170 ignition[826]: Ignition finished successfully Nov 23 22:54:20.795019 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 23 22:54:20.801452 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 23 22:54:20.844067 ignition[833]: Ignition 2.22.0 Nov 23 22:54:20.844089 ignition[833]: Stage: kargs Nov 23 22:54:20.844447 ignition[833]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:20.844458 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:20.845474 ignition[833]: kargs: kargs passed Nov 23 22:54:20.845538 ignition[833]: Ignition finished successfully Nov 23 22:54:20.850582 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 23 22:54:20.856124 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 23 22:54:20.895464 ignition[839]: Ignition 2.22.0 Nov 23 22:54:20.895485 ignition[839]: Stage: disks Nov 23 22:54:20.895747 ignition[839]: no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:20.895759 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:20.896544 ignition[839]: disks: disks passed Nov 23 22:54:20.898021 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 23 22:54:20.896593 ignition[839]: Ignition finished successfully Nov 23 22:54:20.899097 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Nov 23 22:54:20.900935 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 23 22:54:20.901703 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:54:20.902424 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:54:20.903559 systemd[1]: Reached target basic.target - Basic System. Nov 23 22:54:20.906581 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 23 22:54:20.941636 systemd-fsck[848]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Nov 23 22:54:20.946789 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 23 22:54:20.954019 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 23 22:54:21.050801 kernel: EXT4-fs (sda9): mounted filesystem c70a3a7b-80c4-4387-ab29-1bf940859b86 r/w with ordered data mode. Quota mode: none. Nov 23 22:54:21.051273 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 23 22:54:21.052670 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 23 22:54:21.056414 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:54:21.061784 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 23 22:54:21.077391 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 23 22:54:21.078579 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 23 22:54:21.078629 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:54:21.095739 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 23 22:54:21.098014 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 23 22:54:21.100064 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (856) Nov 23 22:54:21.103142 kernel: BTRFS info (device sda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:21.103204 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:21.111462 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 22:54:21.111524 kernel: BTRFS info (device sda6): turning on async discard Nov 23 22:54:21.111535 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 22:54:21.115171 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 23 22:54:21.160321 initrd-setup-root[885]: cut: /sysroot/etc/passwd: No such file or directory Nov 23 22:54:21.162841 coreos-metadata[858]: Nov 23 22:54:21.162 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 23 22:54:21.165511 coreos-metadata[858]: Nov 23 22:54:21.164 INFO Fetch successful Nov 23 22:54:21.165511 coreos-metadata[858]: Nov 23 22:54:21.164 INFO wrote hostname ci-4459-1-2-5-0c65a92823 to /sysroot/etc/hostname Nov 23 22:54:21.168798 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 22:54:21.173355 initrd-setup-root[893]: cut: /sysroot/etc/group: No such file or directory Nov 23 22:54:21.178608 initrd-setup-root[900]: cut: /sysroot/etc/shadow: No such file or directory Nov 23 22:54:21.183864 initrd-setup-root[907]: cut: /sysroot/etc/gshadow: No such file or directory Nov 23 22:54:21.293240 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Nov 23 22:54:21.295194 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 23 22:54:21.301067 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 23 22:54:21.315570 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 23 22:54:21.317758 kernel: BTRFS info (device sda6): last unmount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:21.339598 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 23 22:54:21.356052 ignition[975]: INFO : Ignition 2.22.0 Nov 23 22:54:21.356052 ignition[975]: INFO : Stage: mount Nov 23 22:54:21.358212 ignition[975]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:21.358212 ignition[975]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:21.358212 ignition[975]: INFO : mount: mount passed Nov 23 22:54:21.358212 ignition[975]: INFO : Ignition finished successfully Nov 23 22:54:21.361082 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 23 22:54:21.363627 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 23 22:54:21.390233 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 23 22:54:21.423853 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (986) Nov 23 22:54:21.426755 kernel: BTRFS info (device sda6): first mount of filesystem b13f7cbd-5564-4927-b75d-d55dbc1bbfa7 Nov 23 22:54:21.426819 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Nov 23 22:54:21.430861 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 23 22:54:21.430925 kernel: BTRFS info (device sda6): turning on async discard Nov 23 22:54:21.430935 kernel: BTRFS info (device sda6): enabling free space tree Nov 23 22:54:21.434135 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 23 22:54:21.477754 ignition[1003]: INFO : Ignition 2.22.0 Nov 23 22:54:21.477754 ignition[1003]: INFO : Stage: files Nov 23 22:54:21.477754 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:21.477754 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:21.480374 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping Nov 23 22:54:21.482631 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 23 22:54:21.482631 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 23 22:54:21.486061 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 23 22:54:21.487055 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 23 22:54:21.487771 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 23 22:54:21.487612 unknown[1003]: wrote ssh authorized keys file for user: core Nov 23 22:54:21.491591 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 22:54:21.491591 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Nov 23 22:54:21.571829 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:54:21.656679 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 23 22:54:21.671450 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:54:21.671450 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 23 22:54:21.671450 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:54:21.678556 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:54:21.678556 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:54:21.678556 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Nov 23 22:54:21.814578 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 23 22:54:21.970911 systemd-networkd[821]: eth0: Gained IPv6LL Nov 23 22:54:22.415819 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 23 22:54:22.415819 ignition[1003]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 23 22:54:22.419400 ignition[1003]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 23 22:54:22.435642 ignition[1003]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 23 22:54:22.435642 ignition[1003]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:54:22.435642 ignition[1003]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 23 22:54:22.435642 ignition[1003]: INFO : files: files passed Nov 23 22:54:22.435642 ignition[1003]: INFO : Ignition finished successfully Nov 23 22:54:22.425037 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 23 22:54:22.428037 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 23 22:54:22.435547 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 23 22:54:22.449457 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 23 22:54:22.449849 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 23 22:54:22.459698 initrd-setup-root-after-ignition[1032]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:54:22.459698 initrd-setup-root-after-ignition[1032]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:54:22.462559 initrd-setup-root-after-ignition[1036]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 23 22:54:22.463741 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:54:22.464685 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 23 22:54:22.466799 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 23 22:54:22.529992 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 23 22:54:22.530307 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 23 22:54:22.533822 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 23 22:54:22.536491 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 23 22:54:22.538453 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 23 22:54:22.539391 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 23 22:54:22.546902 systemd-networkd[821]: eth1: Gained IPv6LL Nov 23 22:54:22.571500 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:54:22.574377 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 23 22:54:22.611949 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:54:22.613581 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:54:22.614526 systemd[1]: Stopped target timers.target - Timer Units. Nov 23 22:54:22.616187 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 23 22:54:22.616341 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 23 22:54:22.619793 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 23 22:54:22.620663 systemd[1]: Stopped target basic.target - Basic System. Nov 23 22:54:22.621885 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 23 22:54:22.623083 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 23 22:54:22.624326 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 23 22:54:22.625446 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 23 22:54:22.626662 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 23 22:54:22.627811 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 23 22:54:22.629205 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 23 22:54:22.630347 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 23 22:54:22.631514 systemd[1]: Stopped target swap.target - Swaps. Nov 23 22:54:22.632426 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 23 22:54:22.632558 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 23 22:54:22.633824 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:54:22.634456 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 23 22:54:22.635518 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 23 22:54:22.636016 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:54:22.636742 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 23 22:54:22.636871 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 23 22:54:22.638473 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 23 22:54:22.638600 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 23 22:54:22.639793 systemd[1]: ignition-files.service: Deactivated successfully. Nov 23 22:54:22.639912 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 23 22:54:22.641079 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 23 22:54:22.641187 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 23 22:54:22.643113 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 23 22:54:22.647001 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 23 22:54:22.650252 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 23 22:54:22.650419 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:54:22.652579 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 23 22:54:22.652701 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 23 22:54:22.660614 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 23 22:54:22.660714 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 23 22:54:22.671669 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 23 22:54:22.675091 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 23 22:54:22.676759 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 23 22:54:22.685759 ignition[1056]: INFO : Ignition 2.22.0 Nov 23 22:54:22.685759 ignition[1056]: INFO : Stage: umount Nov 23 22:54:22.685759 ignition[1056]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 23 22:54:22.685759 ignition[1056]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 23 22:54:22.688664 ignition[1056]: INFO : umount: umount passed Nov 23 22:54:22.689285 ignition[1056]: INFO : Ignition finished successfully Nov 23 22:54:22.692126 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 23 22:54:22.692342 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 23 22:54:22.694350 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 23 22:54:22.694415 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 23 22:54:22.695683 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 23 22:54:22.695765 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 23 22:54:22.698089 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 23 22:54:22.698216 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 23 22:54:22.700074 systemd[1]: Stopped target network.target - Network. Nov 23 22:54:22.701072 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 23 22:54:22.701132 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 23 22:54:22.702135 systemd[1]: Stopped target paths.target - Path Units. 
Nov 23 22:54:22.702970 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 23 22:54:22.706897 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:54:22.709258 systemd[1]: Stopped target slices.target - Slice Units. Nov 23 22:54:22.710203 systemd[1]: Stopped target sockets.target - Socket Units. Nov 23 22:54:22.711305 systemd[1]: iscsid.socket: Deactivated successfully. Nov 23 22:54:22.711357 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 23 22:54:22.712422 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 23 22:54:22.712457 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 23 22:54:22.713547 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 23 22:54:22.713611 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 23 22:54:22.714795 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 23 22:54:22.714841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 23 22:54:22.715685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 23 22:54:22.715743 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 23 22:54:22.716861 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 23 22:54:22.717836 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 23 22:54:22.728971 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 23 22:54:22.729203 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 23 22:54:22.734718 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 23 22:54:22.735030 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 23 22:54:22.735135 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 23 22:54:22.737548 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 23 22:54:22.738898 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 23 22:54:22.739564 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 23 22:54:22.739606 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:54:22.741680 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 23 22:54:22.743288 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 23 22:54:22.743363 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 23 22:54:22.745519 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 23 22:54:22.745584 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:54:22.747956 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 23 22:54:22.748565 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 23 22:54:22.749277 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 23 22:54:22.749322 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:54:22.751267 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:54:22.758641 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 23 22:54:22.758864 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Nov 23 22:54:22.766969 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 23 22:54:22.769169 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:54:22.770593 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 23 22:54:22.770637 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 23 22:54:22.773350 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 23 22:54:22.773386 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:54:22.774683 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 23 22:54:22.774753 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 23 22:54:22.776492 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 23 22:54:22.776545 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 23 22:54:22.778055 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 23 22:54:22.778110 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 23 22:54:22.780471 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 23 22:54:22.782882 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 23 22:54:22.782965 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 23 22:54:22.783825 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 23 22:54:22.783875 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:54:22.784663 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 23 22:54:22.784702 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:54:22.786013 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 23 22:54:22.786058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:54:22.789100 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:54:22.789372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:22.794468 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 23 22:54:22.794542 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Nov 23 22:54:22.794581 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 23 22:54:22.794622 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 23 22:54:22.795138 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 23 22:54:22.795280 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 23 22:54:22.803794 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 23 22:54:22.804990 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 23 22:54:22.806322 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 23 22:54:22.808099 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 23 22:54:22.829372 systemd[1]: Switching root. Nov 23 22:54:22.862579 systemd-journald[245]: Journal stopped Nov 23 22:54:23.869604 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). 
Nov 23 22:54:23.869671 kernel: SELinux: policy capability network_peer_controls=1 Nov 23 22:54:23.869689 kernel: SELinux: policy capability open_perms=1 Nov 23 22:54:23.869700 kernel: SELinux: policy capability extended_socket_class=1 Nov 23 22:54:23.869710 kernel: SELinux: policy capability always_check_network=0 Nov 23 22:54:23.869719 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 23 22:54:23.869741 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 23 22:54:23.869753 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 23 22:54:23.869762 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 23 22:54:23.869772 kernel: SELinux: policy capability userspace_initial_context=0 Nov 23 22:54:23.869789 kernel: audit: type=1403 audit(1763938463.051:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 23 22:54:23.869803 systemd[1]: Successfully loaded SELinux policy in 71.744ms. Nov 23 22:54:23.869825 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.332ms. Nov 23 22:54:23.869837 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 23 22:54:23.869849 systemd[1]: Detected virtualization kvm. Nov 23 22:54:23.869859 systemd[1]: Detected architecture arm64. Nov 23 22:54:23.869871 systemd[1]: Detected first boot. Nov 23 22:54:23.869881 systemd[1]: Hostname set to <ci-4459-1-2-5-0c65a92823>. Nov 23 22:54:23.869895 systemd[1]: Initializing machine ID from VM UUID. Nov 23 22:54:23.869907 zram_generator::config[1100]: No configuration found. Nov 23 22:54:23.869919 kernel: NET: Registered PF_VSOCK protocol family Nov 23 22:54:23.869930 systemd[1]: Populated /etc with preset unit settings. Nov 23 22:54:23.869941 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 23 22:54:23.869951 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 23 22:54:23.869964 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 23 22:54:23.869975 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 23 22:54:23.869985 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 23 22:54:23.872834 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 23 22:54:23.872852 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 23 22:54:23.872863 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 23 22:54:23.872873 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 23 22:54:23.872884 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 23 22:54:23.872895 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 23 22:54:23.872911 systemd[1]: Created slice user.slice - User and Session Slice. Nov 23 22:54:23.872921 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 23 22:54:23.872933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 23 22:54:23.872952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 23 22:54:23.872964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 23 22:54:23.872979 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 23 22:54:23.872991 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 23 22:54:23.873003 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 23 22:54:23.873013 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 23 22:54:23.873027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 23 22:54:23.873038 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 23 22:54:23.873048 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 23 22:54:23.873058 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 23 22:54:23.873068 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 23 22:54:23.873082 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 23 22:54:23.873094 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 23 22:54:23.873104 systemd[1]: Reached target slices.target - Slice Units. Nov 23 22:54:23.873114 systemd[1]: Reached target swap.target - Swaps. Nov 23 22:54:23.873125 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 23 22:54:23.873136 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 23 22:54:23.873182 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 23 22:54:23.873195 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 23 22:54:23.873206 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 23 22:54:23.873217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 23 22:54:23.873231 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 23 22:54:23.873242 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 23 22:54:23.873253 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 23 22:54:23.873263 systemd[1]: Mounting media.mount - External Media Directory... Nov 23 22:54:23.873274 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 23 22:54:23.873286 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 23 22:54:23.873296 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 23 22:54:23.873308 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 23 22:54:23.873318 systemd[1]: Reached target machines.target - Containers. Nov 23 22:54:23.873330 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 23 22:54:23.873341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:23.873352 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 23 22:54:23.873363 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 23 22:54:23.873374 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Nov 23 22:54:23.873387 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:54:23.873398 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:54:23.873410 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 23 22:54:23.873421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:54:23.873432 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 23 22:54:23.873448 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 23 22:54:23.873459 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 23 22:54:23.873469 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 23 22:54:23.873480 systemd[1]: Stopped systemd-fsck-usr.service. Nov 23 22:54:23.873492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:23.873507 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 23 22:54:23.873520 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 23 22:54:23.873535 kernel: loop: module loaded Nov 23 22:54:23.873547 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 23 22:54:23.873562 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 23 22:54:23.873574 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 23 22:54:23.873584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 23 22:54:23.873595 systemd[1]: verity-setup.service: Deactivated successfully. Nov 23 22:54:23.873606 systemd[1]: Stopped verity-setup.service. Nov 23 22:54:23.873617 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 23 22:54:23.873627 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 23 22:54:23.873639 systemd[1]: Mounted media.mount - External Media Directory. Nov 23 22:54:23.873650 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 23 22:54:23.873660 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 23 22:54:23.873671 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 23 22:54:23.873681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 23 22:54:23.873692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:54:23.873703 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:54:23.873713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:54:23.876811 kernel: ACPI: bus type drm_connector registered Nov 23 22:54:23.876855 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:54:23.876870 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:54:23.876881 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:54:23.876893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 23 22:54:23.876906 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 23 22:54:23.876917 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 23 22:54:23.876928 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 23 22:54:23.876939 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 23 22:54:23.876950 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 23 22:54:23.876964 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 23 22:54:23.877033 systemd-journald[1165]: Collecting audit messages is disabled. Nov 23 22:54:23.877059 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 23 22:54:23.877070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:23.877081 kernel: fuse: init (API version 7.41) Nov 23 22:54:23.877092 systemd-journald[1165]: Journal started Nov 23 22:54:23.877115 systemd-journald[1165]: Runtime Journal (/run/log/journal/3629654f70154343a4e3531731bbf882) is 8M, max 76.5M, 68.5M free. Nov 23 22:54:23.584352 systemd[1]: Queued start job for default target multi-user.target. Nov 23 22:54:23.608196 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 23 22:54:23.608684 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 23 22:54:23.883150 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 23 22:54:23.883217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:54:23.887111 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 23 22:54:23.888815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:54:23.894781 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 23 22:54:23.898263 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 23 22:54:23.908523 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 23 22:54:23.909776 systemd[1]: Started systemd-journald.service - Journal Service. Nov 23 22:54:23.910699 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 23 22:54:23.911624 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 23 22:54:23.911833 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 23 22:54:23.912716 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:54:23.912883 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:54:23.913785 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 23 22:54:23.913930 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 23 22:54:23.914833 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 23 22:54:23.928455 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 23 22:54:23.947743 kernel: loop0: detected capacity change from 0 to 100632 Nov 23 22:54:23.950935 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Nov 23 22:54:23.960012 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 23 22:54:23.967876 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 23 22:54:23.973048 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 23 22:54:23.975232 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 23 22:54:23.978257 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 23 22:54:23.984410 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 23 22:54:23.990789 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 23 22:54:24.012080 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 23 22:54:24.017476 systemd-journald[1165]: Time spent on flushing to /var/log/journal/3629654f70154343a4e3531731bbf882 is 66.811ms for 1183 entries. Nov 23 22:54:24.017476 systemd-journald[1165]: System Journal (/var/log/journal/3629654f70154343a4e3531731bbf882) is 8M, max 584.8M, 576.8M free. Nov 23 22:54:24.101928 systemd-journald[1165]: Received client request to flush runtime journal. Nov 23 22:54:24.101982 kernel: loop1: detected capacity change from 0 to 119840 Nov 23 22:54:24.101997 kernel: loop2: detected capacity change from 0 to 8 Nov 23 22:54:24.041299 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Nov 23 22:54:24.041311 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Nov 23 22:54:24.057029 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 23 22:54:24.064060 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 23 22:54:24.065846 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 23 22:54:24.107499 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 23 22:54:24.110547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 23 22:54:24.119758 kernel: loop3: detected capacity change from 0 to 207008 Nov 23 22:54:24.145385 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 23 22:54:24.157004 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 23 22:54:24.169758 kernel: loop4: detected capacity change from 0 to 100632 Nov 23 22:54:24.193132 kernel: loop5: detected capacity change from 0 to 119840 Nov 23 22:54:24.198622 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Nov 23 22:54:24.198641 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. Nov 23 22:54:24.211784 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 23 22:54:24.215757 kernel: loop6: detected capacity change from 0 to 8 Nov 23 22:54:24.221758 kernel: loop7: detected capacity change from 0 to 207008 Nov 23 22:54:24.243798 (sd-merge)[1245]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 23 22:54:24.244318 (sd-merge)[1245]: Merged extensions into '/usr'. Nov 23 22:54:24.250442 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Nov 23 22:54:24.250467 systemd[1]: Reloading... Nov 23 22:54:24.366748 zram_generator::config[1269]: No configuration found. Nov 23 22:54:24.484757 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Nov 23 22:54:24.614013 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 23 22:54:24.614650 systemd[1]: Reloading finished in 363 ms. Nov 23 22:54:24.635790 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 23 22:54:24.636827 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 23 22:54:24.637821 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 23 22:54:24.648803 systemd[1]: Starting ensure-sysext.service... Nov 23 22:54:24.651946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 23 22:54:24.670153 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 23 22:54:24.683910 systemd[1]: Reload requested from client PID 1312 ('systemctl') (unit ensure-sysext.service)... Nov 23 22:54:24.683930 systemd[1]: Reloading... Nov 23 22:54:24.696233 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 23 22:54:24.696283 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 23 22:54:24.696586 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 23 22:54:24.696846 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 23 22:54:24.697615 systemd-tmpfiles[1313]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 23 22:54:24.703048 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Nov 23 22:54:24.703107 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Nov 23 22:54:24.710609 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:54:24.710627 systemd-tmpfiles[1313]: Skipping /boot Nov 23 22:54:24.722849 systemd-udevd[1314]: Using default interface naming scheme 'v255'. Nov 23 22:54:24.723181 systemd-tmpfiles[1313]: Detected autofs mount point /boot during canonicalization of boot. Nov 23 22:54:24.723195 systemd-tmpfiles[1313]: Skipping /boot Nov 23 22:54:24.792755 zram_generator::config[1347]: No configuration found. Nov 23 22:54:25.030747 kernel: mousedev: PS/2 mouse device common for all mice Nov 23 22:54:25.030978 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 23 22:54:25.031252 systemd[1]: Reloading finished in 346 ms. Nov 23 22:54:25.050510 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 23 22:54:25.060927 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 23 22:54:25.072165 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:54:25.075082 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 23 22:54:25.079369 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 23 22:54:25.085148 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 23 22:54:25.093710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 23 22:54:25.097226 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 23 22:54:25.105073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:25.107905 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:54:25.111921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:54:25.119820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 23 22:54:25.122012 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:25.122197 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:25.128857 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 23 22:54:25.132532 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:25.132752 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:25.132849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:25.136514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:25.144056 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 23 22:54:25.145647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:25.146104 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:25.154783 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 23 22:54:25.171014 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 23 22:54:25.174078 systemd[1]: Finished ensure-sysext.service. Nov 23 22:54:25.182948 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 23 22:54:25.194567 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 23 22:54:25.197018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:54:25.197214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:54:25.199562 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:54:25.201807 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:54:25.210456 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 23 22:54:25.210533 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Nov 23 22:54:25.213850 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 23 22:54:25.227516 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 23 22:54:25.236817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 23 22:54:25.245017 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 23 22:54:25.245200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 23 22:54:25.247687 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 23 22:54:25.276691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 23 22:54:25.277975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 23 22:54:25.278028 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 23 22:54:25.278058 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 23 22:54:25.281855 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 23 22:54:25.289901 augenrules[1468]: No rules Nov 23 22:54:25.292099 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:54:25.295298 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:54:25.305879 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 23 22:54:25.307797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 23 22:54:25.315558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 23 22:54:25.332825 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 23 22:54:25.336089 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 23 22:54:25.336352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 23 22:54:25.340650 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 23 22:54:25.346087 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Nov 23 22:54:25.346182 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 23 22:54:25.346200 kernel: [drm] features: -context_init Nov 23 22:54:25.351769 kernel: [drm] number of scanouts: 1 Nov 23 22:54:25.351826 kernel: [drm] number of cap sets: 0 Nov 23 22:54:25.361757 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Nov 23 22:54:25.367389 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 23 22:54:25.370151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 23 22:54:25.370382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 23 22:54:25.371378 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 23 22:54:25.391783 kernel: Console: switching to colour frame buffer device 160x50 Nov 23 22:54:25.406763 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 23 22:54:25.408934 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 23 22:54:25.452085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:25.461309 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 23 22:54:25.462769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:25.464630 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 23 22:54:25.561238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 23 22:54:25.610398 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 23 22:54:25.612116 systemd[1]: Reached target time-set.target - System Time Set. Nov 23 22:54:25.623927 systemd-networkd[1425]: lo: Link UP Nov 23 22:54:25.624289 systemd-networkd[1425]: lo: Gained carrier Nov 23 22:54:25.626386 systemd-networkd[1425]: Enumeration completed Nov 23 22:54:25.626644 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 23 22:54:25.627192 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:25.627270 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:54:25.628011 systemd-networkd[1425]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:25.628088 systemd-networkd[1425]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 23 22:54:25.628552 systemd-networkd[1425]: eth0: Link UP Nov 23 22:54:25.628802 systemd-networkd[1425]: eth0: Gained carrier Nov 23 22:54:25.628872 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:25.631918 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 23 22:54:25.633856 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 23 22:54:25.635326 systemd-networkd[1425]: eth1: Link UP Nov 23 22:54:25.636087 systemd-networkd[1425]: eth1: Gained carrier Nov 23 22:54:25.636186 systemd-networkd[1425]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 23 22:54:25.650341 systemd-resolved[1426]: Positive Trust Anchors: Nov 23 22:54:25.650361 systemd-resolved[1426]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 23 22:54:25.650398 systemd-resolved[1426]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 23 22:54:25.654070 systemd-resolved[1426]: Using system hostname 'ci-4459-1-2-5-0c65a92823'. Nov 23 22:54:25.655807 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 23 22:54:25.656519 systemd[1]: Reached target network.target - Network. Nov 23 22:54:25.656792 systemd-networkd[1425]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 23 22:54:25.657146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 23 22:54:25.658011 systemd[1]: Reached target sysinit.target - System Initialization. Nov 23 22:54:25.658629 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Nov 23 22:54:25.658836 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 23 22:54:25.659497 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 23 22:54:25.660405 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 23 22:54:25.661107 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 23 22:54:25.661822 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 23 22:54:25.662458 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 23 22:54:25.662495 systemd[1]: Reached target paths.target - Path Units. Nov 23 22:54:25.663024 systemd[1]: Reached target timers.target - Timer Units. Nov 23 22:54:25.665116 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 23 22:54:25.667106 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 23 22:54:25.670027 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 23 22:54:25.670884 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 23 22:54:25.671582 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 23 22:54:25.674356 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 23 22:54:25.675333 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 23 22:54:25.678773 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 23 22:54:25.679676 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 23 22:54:25.682181 systemd[1]: Reached target sockets.target - Socket Units. Nov 23 22:54:25.682824 systemd-networkd[1425]: eth0: DHCPv4 address 188.245.196.203/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 23 22:54:25.682930 systemd[1]: Reached target basic.target - Basic System. 
Nov 23 22:54:25.683515 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:54:25.683554 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 23 22:54:25.685045 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Nov 23 22:54:25.685837 systemd[1]: Starting containerd.service - containerd container runtime... Nov 23 22:54:25.687887 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 23 22:54:25.691020 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 23 22:54:25.694131 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 23 22:54:25.700879 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 23 22:54:25.703896 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 23 22:54:25.704646 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 23 22:54:25.708973 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 23 22:54:25.712008 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 23 22:54:25.715038 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 23 22:54:25.721312 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 23 22:54:25.727292 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 23 22:54:25.733637 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 23 22:54:25.738456 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 23 22:54:25.740012 jq[1519]: false Nov 23 22:54:25.739056 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 23 22:54:25.744364 systemd[1]: Starting update-engine.service - Update Engine... Nov 23 22:54:25.748247 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 23 22:54:25.751798 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 23 22:54:25.754053 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 23 22:54:25.754789 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 23 22:54:25.788800 extend-filesystems[1521]: Found /dev/sda6 Nov 23 22:54:25.790096 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 23 22:54:25.796824 coreos-metadata[1516]: Nov 23 22:54:25.794 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 23 22:54:25.791454 (ntainerd)[1546]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 23 22:54:25.795089 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 23 22:54:25.803574 extend-filesystems[1521]: Found /dev/sda9 Nov 23 22:54:25.812639 jq[1531]: true Nov 23 22:54:25.812888 coreos-metadata[1516]: Nov 23 22:54:25.811 INFO Fetch successful Nov 23 22:54:25.812888 coreos-metadata[1516]: Nov 23 22:54:25.811 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 23 22:54:25.812888 coreos-metadata[1516]: Nov 23 22:54:25.811 INFO Fetch successful Nov 23 22:54:25.812960 extend-filesystems[1521]: Checking size of /dev/sda9 Nov 23 22:54:25.820407 systemd[1]: motdgen.service: Deactivated successfully. Nov 23 22:54:25.821793 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 23 22:54:25.841499 tar[1537]: linux-arm64/LICENSE Nov 23 22:54:25.841499 tar[1537]: linux-arm64/helm Nov 23 22:54:25.866489 dbus-daemon[1517]: [system] SELinux support is enabled Nov 23 22:54:25.867710 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 23 22:54:25.871719 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 23 22:54:25.871763 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 23 22:54:25.873835 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 23 22:54:25.873855 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 23 22:54:25.884072 extend-filesystems[1521]: Resized partition /dev/sda9 Nov 23 22:54:25.892009 extend-filesystems[1566]: resize2fs 1.47.3 (8-Jul-2025) Nov 23 22:54:25.894899 jq[1559]: true Nov 23 22:54:25.904778 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 23 22:54:25.910636 update_engine[1529]: I20251123 22:54:25.910380 1529 main.cc:92] Flatcar Update Engine starting Nov 23 22:54:25.927894 systemd[1]: Started update-engine.service - Update Engine. Nov 23 22:54:25.929820 update_engine[1529]: I20251123 22:54:25.928822 1529 update_check_scheduler.cc:74] Next update check in 7m44s Nov 23 22:54:25.934794 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 23 22:54:26.006118 systemd-logind[1528]: New seat seat0. Nov 23 22:54:26.012955 systemd-logind[1528]: Watching system buttons on /dev/input/event0 (Power Button) Nov 23 22:54:26.012999 systemd-logind[1528]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Nov 23 22:54:26.016117 systemd[1]: Started systemd-logind.service - User Login Management. Nov 23 22:54:26.020805 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 23 22:54:26.023919 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 23 22:54:26.082372 bash[1595]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:54:26.085267 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 23 22:54:26.091034 systemd[1]: Starting sshkeys.service... 
Nov 23 22:54:26.099758 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 23 22:54:26.123559 extend-filesystems[1566]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 23 22:54:26.123559 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 23 22:54:26.123559 extend-filesystems[1566]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 23 22:54:26.132115 extend-filesystems[1521]: Resized filesystem in /dev/sda9 Nov 23 22:54:26.127571 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 23 22:54:26.127841 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 23 22:54:26.140012 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 23 22:54:26.144590 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 23 22:54:26.220023 containerd[1546]: time="2025-11-23T22:54:26Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 23 22:54:26.228514 containerd[1546]: time="2025-11-23T22:54:26.227090560Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 23 22:54:26.240513 coreos-metadata[1601]: Nov 23 22:54:26.240 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 23 22:54:26.241470 coreos-metadata[1601]: Nov 23 22:54:26.241 INFO Fetch successful Nov 23 22:54:26.246757 unknown[1601]: wrote ssh authorized keys file for user: core Nov 23 22:54:26.256425 containerd[1546]: time="2025-11-23T22:54:26.256373760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.84µs" Nov 23 22:54:26.256660 containerd[1546]: time="2025-11-23T22:54:26.256636440Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.256766480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.256946320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.256967680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.257000880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.257068200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.257081640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.257342160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 23 
22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.257362400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.257376840Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 23 22:54:26.257819 containerd[1546]: time="2025-11-23T22:54:26.257385320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 23 22:54:26.259242 containerd[1546]: time="2025-11-23T22:54:26.259199240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 23 22:54:26.260437 containerd[1546]: time="2025-11-23T22:54:26.260406600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:54:26.260677 containerd[1546]: time="2025-11-23T22:54:26.260656920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 23 22:54:26.260901 containerd[1546]: time="2025-11-23T22:54:26.260882080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 23 22:54:26.261108 containerd[1546]: time="2025-11-23T22:54:26.261089320Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 23 22:54:26.263516 containerd[1546]: time="2025-11-23T22:54:26.263375840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 23 22:54:26.264005 containerd[1546]: time="2025-11-23T22:54:26.263922720Z" level=info msg="metadata content store policy set" policy=shared Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.268764720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.268938960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.268959040Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269022040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269039520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269050960Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269066440Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269080120Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269092760Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service 
type=io.containerd.service.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269103160Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 23 22:54:26.270752 containerd[1546]: time="2025-11-23T22:54:26.269328640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.269360280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273185680Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273228560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273252400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273268760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273281840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273298280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273314120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273325560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273341440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273355600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273369960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273561280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273580200Z" level=info msg="Start snapshots syncer" Nov 23 22:54:26.273945 containerd[1546]: time="2025-11-23T22:54:26.273608240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 23 22:54:26.276535 containerd[1546]: time="2025-11-23T22:54:26.276476480Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 23 22:54:26.276699 containerd[1546]: time="2025-11-23T22:54:26.276557320Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 23 22:54:26.276699 containerd[1546]: time="2025-11-23T22:54:26.276623680Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276797000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276829760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276843280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276860200Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276874040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276885920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276898000Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276927080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 23 22:54:26.276952 containerd[1546]: 
time="2025-11-23T22:54:26.276939720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 23 22:54:26.276952 containerd[1546]: time="2025-11-23T22:54:26.276952240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.276991280Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.277006200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.277016560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.277027760Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.277036840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.277053280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.277065360Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 23 22:54:26.277158 containerd[1546]: time="2025-11-23T22:54:26.277159920Z" level=info msg="runtime interface created" Nov 23 22:54:26.277508 containerd[1546]: time="2025-11-23T22:54:26.277167040Z" level=info msg="created NRI interface" Nov 23 22:54:26.277508 containerd[1546]: time="2025-11-23T22:54:26.277176680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 23 22:54:26.277508 containerd[1546]: time="2025-11-23T22:54:26.277192280Z" level=info msg="Connect containerd service" Nov 23 22:54:26.277508 containerd[1546]: time="2025-11-23T22:54:26.277216680Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 23 22:54:26.280085 containerd[1546]: time="2025-11-23T22:54:26.279946560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:54:26.304752 update-ssh-keys[1610]: Updated "/home/core/.ssh/authorized_keys" Nov 23 22:54:26.307778 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 23 22:54:26.314760 systemd[1]: Finished sshkeys.service. 
Nov 23 22:54:26.405576 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483753320Z" level=info msg="Start subscribing containerd event" Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483835320Z" level=info msg="Start recovering state" Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483933400Z" level=info msg="Start event monitor" Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483948640Z" level=info msg="Start cni network conf syncer for default" Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483956160Z" level=info msg="Start streaming server" Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483966160Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483974520Z" level=info msg="runtime interface starting up..." Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483981240Z" level=info msg="starting plugins..." Nov 23 22:54:26.484251 containerd[1546]: time="2025-11-23T22:54:26.483994880Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 23 22:54:26.485061 containerd[1546]: time="2025-11-23T22:54:26.484926040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 23 22:54:26.485061 containerd[1546]: time="2025-11-23T22:54:26.485000960Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 23 22:54:26.485295 containerd[1546]: time="2025-11-23T22:54:26.485208320Z" level=info msg="containerd successfully booted in 0.265630s" Nov 23 22:54:26.485361 systemd[1]: Started containerd.service - containerd container runtime. Nov 23 22:54:26.541226 tar[1537]: linux-arm64/README.md Nov 23 22:54:26.552981 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 23 22:54:26.560080 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 23 22:54:26.580330 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 23 22:54:26.584694 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 23 22:54:26.606286 systemd[1]: issuegen.service: Deactivated successfully. Nov 23 22:54:26.606551 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 23 22:54:26.610125 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 23 22:54:26.636001 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 23 22:54:26.640150 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 23 22:54:26.643022 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 23 22:54:26.644070 systemd[1]: Reached target getty.target - Login Prompts. Nov 23 22:54:27.218938 systemd-networkd[1425]: eth0: Gained IPv6LL Nov 23 22:54:27.219908 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Nov 23 22:54:27.223169 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 23 22:54:27.226257 systemd[1]: Reached target network-online.target - Network is Online. Nov 23 22:54:27.230563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:27.235016 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 23 22:54:27.278493 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Nov 23 22:54:27.538886 systemd-networkd[1425]: eth1: Gained IPv6LL Nov 23 22:54:27.539687 systemd-timesyncd[1443]: Network configuration changed, trying to establish connection. Nov 23 22:54:28.039968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:28.041408 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 23 22:54:28.047429 systemd[1]: Startup finished in 2.404s (kernel) + 5.421s (initrd) + 5.065s (userspace) = 12.892s. Nov 23 22:54:28.052695 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:54:28.567987 kubelet[1665]: E1123 22:54:28.567906 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:54:28.571815 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:54:28.572024 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:54:28.572589 systemd[1]: kubelet.service: Consumed 872ms CPU time, 253.8M memory peak. Nov 23 22:54:38.823004 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 23 22:54:38.826193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:38.985174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:38.998800 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:54:39.045718 kubelet[1683]: E1123 22:54:39.045657 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:54:39.048931 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:54:39.049105 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:54:39.049473 systemd[1]: kubelet.service: Consumed 174ms CPU time, 107.2M memory peak. Nov 23 22:54:49.300358 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 23 22:54:49.303110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:49.473824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:49.482260 (kubelet)[1699]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:54:49.539583 kubelet[1699]: E1123 22:54:49.539522 1699 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:54:49.542615 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:54:49.542871 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 23 22:54:49.543671 systemd[1]: kubelet.service: Consumed 179ms CPU time, 107.7M memory peak. Nov 23 22:54:55.293067 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 23 22:54:55.295953 systemd[1]: Started sshd@0-188.245.196.203:22-139.178.89.65:39590.service - OpenSSH per-connection server daemon (139.178.89.65:39590). Nov 23 22:54:56.293187 sshd[1708]: Accepted publickey for core from 139.178.89.65 port 39590 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:54:56.295940 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:54:56.304382 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 23 22:54:56.306954 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 23 22:54:56.315779 systemd-logind[1528]: New session 1 of user core. Nov 23 22:54:56.335747 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 23 22:54:56.340881 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 23 22:54:56.353064 (systemd)[1713]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 23 22:54:56.356418 systemd-logind[1528]: New session c1 of user core. Nov 23 22:54:56.483484 systemd[1713]: Queued start job for default target default.target. Nov 23 22:54:56.503651 systemd[1713]: Created slice app.slice - User Application Slice. Nov 23 22:54:56.503717 systemd[1713]: Reached target paths.target - Paths. Nov 23 22:54:56.504204 systemd[1713]: Reached target timers.target - Timers. Nov 23 22:54:56.506789 systemd[1713]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 23 22:54:56.532446 systemd[1713]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 23 22:54:56.532576 systemd[1713]: Reached target sockets.target - Sockets. Nov 23 22:54:56.532780 systemd[1713]: Reached target basic.target - Basic System. Nov 23 22:54:56.532877 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 23 22:54:56.533615 systemd[1713]: Reached target default.target - Main User Target. Nov 23 22:54:56.533663 systemd[1713]: Startup finished in 169ms. Nov 23 22:54:56.544151 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 23 22:54:57.235271 systemd[1]: Started sshd@1-188.245.196.203:22-139.178.89.65:39596.service - OpenSSH per-connection server daemon (139.178.89.65:39596). Nov 23 22:54:57.849428 systemd-timesyncd[1443]: Contacted time server 185.232.69.65:123 (2.flatcar.pool.ntp.org). Nov 23 22:54:57.849531 systemd-timesyncd[1443]: Initial clock synchronization to Sun 2025-11-23 22:54:57.736695 UTC. Nov 23 22:54:58.212904 sshd[1724]: Accepted publickey for core from 139.178.89.65 port 39596 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:54:58.216037 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:54:58.222615 systemd-logind[1528]: New session 2 of user core. Nov 23 22:54:58.233045 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 23 22:54:58.874692 sshd[1727]: Connection closed by 139.178.89.65 port 39596 Nov 23 22:54:58.875379 sshd-session[1724]: pam_unix(sshd:session): session closed for user core Nov 23 22:54:58.880271 systemd[1]: sshd@1-188.245.196.203:22-139.178.89.65:39596.service: Deactivated successfully. Nov 23 22:54:58.882483 systemd[1]: session-2.scope: Deactivated successfully. 
Nov 23 22:54:58.883413 systemd-logind[1528]: Session 2 logged out. Waiting for processes to exit. Nov 23 22:54:58.884802 systemd-logind[1528]: Removed session 2. Nov 23 22:54:59.043178 systemd[1]: Started sshd@2-188.245.196.203:22-139.178.89.65:39600.service - OpenSSH per-connection server daemon (139.178.89.65:39600). Nov 23 22:54:59.794008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 23 22:54:59.797476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:54:59.958552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:54:59.967507 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:55:00.008766 sshd[1733]: Accepted publickey for core from 139.178.89.65 port 39600 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:00.009913 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:00.022670 systemd-logind[1528]: New session 3 of user core. Nov 23 22:55:00.025627 kubelet[1743]: E1123 22:55:00.025581 1743 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:55:00.026122 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 23 22:55:00.029003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:55:00.029149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:55:00.030175 systemd[1]: kubelet.service: Consumed 178ms CPU time, 105.8M memory peak. Nov 23 22:55:00.664822 sshd[1752]: Connection closed by 139.178.89.65 port 39600 Nov 23 22:55:00.665652 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:00.671523 systemd-logind[1528]: Session 3 logged out. Waiting for processes to exit. Nov 23 22:55:00.671888 systemd[1]: sshd@2-188.245.196.203:22-139.178.89.65:39600.service: Deactivated successfully. Nov 23 22:55:00.673783 systemd[1]: session-3.scope: Deactivated successfully. Nov 23 22:55:00.675579 systemd-logind[1528]: Removed session 3. Nov 23 22:55:00.835079 systemd[1]: Started sshd@3-188.245.196.203:22-139.178.89.65:60846.service - OpenSSH per-connection server daemon (139.178.89.65:60846). Nov 23 22:55:01.827779 sshd[1758]: Accepted publickey for core from 139.178.89.65 port 60846 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:01.831380 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:01.839794 systemd-logind[1528]: New session 4 of user core. Nov 23 22:55:01.846491 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 23 22:55:02.492292 sshd[1761]: Connection closed by 139.178.89.65 port 60846 Nov 23 22:55:02.492747 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:02.500044 systemd-logind[1528]: Session 4 logged out. Waiting for processes to exit. Nov 23 22:55:02.500561 systemd[1]: sshd@3-188.245.196.203:22-139.178.89.65:60846.service: Deactivated successfully. Nov 23 22:55:02.502797 systemd[1]: session-4.scope: Deactivated successfully. Nov 23 22:55:02.506427 systemd-logind[1528]: Removed session 4. 
Nov 23 22:55:02.663696 systemd[1]: Started sshd@4-188.245.196.203:22-139.178.89.65:60856.service - OpenSSH per-connection server daemon (139.178.89.65:60856). Nov 23 22:55:03.638966 sshd[1767]: Accepted publickey for core from 139.178.89.65 port 60856 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:03.641075 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:03.645907 systemd-logind[1528]: New session 5 of user core. Nov 23 22:55:03.658071 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 23 22:55:04.157207 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 23 22:55:04.157498 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:04.173447 sudo[1771]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:04.329425 sshd[1770]: Connection closed by 139.178.89.65 port 60856 Nov 23 22:55:04.330403 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:04.335672 systemd[1]: sshd@4-188.245.196.203:22-139.178.89.65:60856.service: Deactivated successfully. Nov 23 22:55:04.337902 systemd[1]: session-5.scope: Deactivated successfully. Nov 23 22:55:04.339694 systemd-logind[1528]: Session 5 logged out. Waiting for processes to exit. Nov 23 22:55:04.342027 systemd-logind[1528]: Removed session 5. Nov 23 22:55:04.499000 systemd[1]: Started sshd@5-188.245.196.203:22-139.178.89.65:60866.service - OpenSSH per-connection server daemon (139.178.89.65:60866). Nov 23 22:55:05.480312 sshd[1777]: Accepted publickey for core from 139.178.89.65 port 60866 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:05.483294 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:05.488923 systemd-logind[1528]: New session 6 of user core. Nov 23 22:55:05.495986 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 23 22:55:05.995087 sudo[1782]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 23 22:55:05.995405 sudo[1782]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:06.002689 sudo[1782]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:06.011578 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 23 22:55:06.011884 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:06.026381 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 23 22:55:06.076348 augenrules[1804]: No rules Nov 23 22:55:06.078093 systemd[1]: audit-rules.service: Deactivated successfully. Nov 23 22:55:06.078364 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 23 22:55:06.079983 sudo[1781]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:06.235784 sshd[1780]: Connection closed by 139.178.89.65 port 60866 Nov 23 22:55:06.236571 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:06.242984 systemd[1]: sshd@5-188.245.196.203:22-139.178.89.65:60866.service: Deactivated successfully. Nov 23 22:55:06.246804 systemd[1]: session-6.scope: Deactivated successfully. Nov 23 22:55:06.248380 systemd-logind[1528]: Session 6 logged out. Waiting for processes to exit. Nov 23 22:55:06.249915 systemd-logind[1528]: Removed session 6. 
Nov 23 22:55:06.412796 systemd[1]: Started sshd@6-188.245.196.203:22-139.178.89.65:60872.service - OpenSSH per-connection server daemon (139.178.89.65:60872). Nov 23 22:55:07.392785 sshd[1813]: Accepted publickey for core from 139.178.89.65 port 60872 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:55:07.395683 sshd-session[1813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:55:07.402931 systemd-logind[1528]: New session 7 of user core. Nov 23 22:55:07.410980 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 23 22:55:07.905820 sudo[1817]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 23 22:55:07.906120 sudo[1817]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 23 22:55:08.243417 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 23 22:55:08.265984 (dockerd)[1836]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 23 22:55:08.513774 dockerd[1836]: time="2025-11-23T22:55:08.511986118Z" level=info msg="Starting up" Nov 23 22:55:08.514857 dockerd[1836]: time="2025-11-23T22:55:08.514369904Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 23 22:55:08.530206 dockerd[1836]: time="2025-11-23T22:55:08.529746751Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 23 22:55:08.561279 systemd[1]: var-lib-docker-metacopy\x2dcheck1695076034-merged.mount: Deactivated successfully. Nov 23 22:55:08.573672 dockerd[1836]: time="2025-11-23T22:55:08.573618439Z" level=info msg="Loading containers: start." Nov 23 22:55:08.585756 kernel: Initializing XFRM netlink socket Nov 23 22:55:08.844033 systemd-networkd[1425]: docker0: Link UP Nov 23 22:55:08.849177 dockerd[1836]: time="2025-11-23T22:55:08.848814885Z" level=info msg="Loading containers: done." Nov 23 22:55:08.867410 dockerd[1836]: time="2025-11-23T22:55:08.867284683Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 23 22:55:08.867591 dockerd[1836]: time="2025-11-23T22:55:08.867471908Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 23 22:55:08.867639 dockerd[1836]: time="2025-11-23T22:55:08.867608082Z" level=info msg="Initializing buildkit" Nov 23 22:55:08.900302 dockerd[1836]: time="2025-11-23T22:55:08.900248288Z" level=info msg="Completed buildkit initialization" Nov 23 22:55:08.911076 dockerd[1836]: time="2025-11-23T22:55:08.910800733Z" level=info msg="Daemon has completed initialization" Nov 23 22:55:08.911076 dockerd[1836]: time="2025-11-23T22:55:08.910887052Z" level=info msg="API listen on /run/docker.sock" Nov 23 22:55:08.911803 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 23 22:55:09.952028 containerd[1546]: time="2025-11-23T22:55:09.951985858Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Nov 23 22:55:10.158612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 23 22:55:10.161057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
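dockerd and containerd both emit logfmt-style lines: space-separated key=value pairs in which values containing spaces are double-quoted (time, level, msg, plus optional extras such as storage-driver). A tolerant parser sketch in Python, based only on the field shapes visible in these entries:

import re

# key=value pairs; values are either double-quoted (possibly with escapes) or a bare token
PAIR_RE = re.compile(r'([A-Za-z0-9_.-]+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    """Split a dockerd/containerd log line into its key=value fields."""
    fields = {}
    for key, raw in PAIR_RE.findall(line):
        fields[key] = raw[1:-1] if raw.startswith('"') else raw
    return fields

sample = 'time="2025-11-23T22:55:08.511986118Z" level=info msg="Starting up"'
print(parse_logfmt(sample))
# {'time': '2025-11-23T22:55:08.511986118Z', 'level': 'info', 'msg': 'Starting up'}
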
Nov 23 22:55:10.325308 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:10.335621 (kubelet)[2054]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:55:10.385411 kubelet[2054]: E1123 22:55:10.385356 2054 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:55:10.388582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:55:10.388805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:55:10.389581 systemd[1]: kubelet.service: Consumed 168ms CPU time, 104.1M memory peak. Nov 23 22:55:10.538461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701890023.mount: Deactivated successfully. Nov 23 22:55:11.156593 update_engine[1529]: I20251123 22:55:11.156526 1529 update_attempter.cc:509] Updating boot flags... Nov 23 22:55:11.711383 containerd[1546]: time="2025-11-23T22:55:11.711229019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:11.713402 containerd[1546]: time="2025-11-23T22:55:11.712886579Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=26432057" Nov 23 22:55:11.714593 containerd[1546]: time="2025-11-23T22:55:11.714548689Z" level=info msg="ImageCreate event name:\"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:11.718053 containerd[1546]: time="2025-11-23T22:55:11.718010126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:11.720393 containerd[1546]: time="2025-11-23T22:55:11.720285831Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"26428558\" in 1.768244738s" Nov 23 22:55:11.720493 containerd[1546]: time="2025-11-23T22:55:11.720394241Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:03aec5fd5841efdd990b8fe285e036fc1386e2f8851378ce2c9dfd1b331897ea\"" Nov 23 22:55:11.721308 containerd[1546]: time="2025-11-23T22:55:11.721253266Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Nov 23 22:55:12.784860 containerd[1546]: time="2025-11-23T22:55:12.784799307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:12.786162 containerd[1546]: time="2025-11-23T22:55:12.786123668Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=22618975" Nov 23 22:55:12.787965 containerd[1546]: time="2025-11-23T22:55:12.787552522Z" level=info msg="ImageCreate event 
name:\"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:12.792837 containerd[1546]: time="2025-11-23T22:55:12.792780956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:12.794246 containerd[1546]: time="2025-11-23T22:55:12.794181671Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"24203439\" in 1.072869866s" Nov 23 22:55:12.794246 containerd[1546]: time="2025-11-23T22:55:12.794233797Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:66490a6490dde2df4a78eba21320da67070ad88461899536880edb5301ec2ba3\"" Nov 23 22:55:12.795415 containerd[1546]: time="2025-11-23T22:55:12.795389325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Nov 23 22:55:13.760538 containerd[1546]: time="2025-11-23T22:55:13.760462042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:13.763495 containerd[1546]: time="2025-11-23T22:55:13.762918331Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=17618456" Nov 23 22:55:13.765165 containerd[1546]: time="2025-11-23T22:55:13.765098506Z" level=info msg="ImageCreate event name:\"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:13.769823 containerd[1546]: time="2025-11-23T22:55:13.769767347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:13.771488 containerd[1546]: time="2025-11-23T22:55:13.771439168Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"19202938\" in 975.923673ms" Nov 23 22:55:13.771488 containerd[1546]: time="2025-11-23T22:55:13.771481647Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fcf368a1abd0b48cff2fd3cca12fcc008aaf52eeab885656f11e7773c6a188a3\"" Nov 23 22:55:13.771975 containerd[1546]: time="2025-11-23T22:55:13.771922169Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Nov 23 22:55:14.744023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923214999.mount: Deactivated successfully. 
Nov 23 22:55:15.030823 containerd[1546]: time="2025-11-23T22:55:15.030677140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:15.033562 containerd[1546]: time="2025-11-23T22:55:15.033517925Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=27561825" Nov 23 22:55:15.035197 containerd[1546]: time="2025-11-23T22:55:15.035148712Z" level=info msg="ImageCreate event name:\"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:15.039713 containerd[1546]: time="2025-11-23T22:55:15.039180724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:15.039929 containerd[1546]: time="2025-11-23T22:55:15.039896722Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"27560818\" in 1.267919493s" Nov 23 22:55:15.040068 containerd[1546]: time="2025-11-23T22:55:15.040050618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:8b57c1f8bd2ddfa793889457b41e87132f192046e262b32ab0514f32d28be47d\"" Nov 23 22:55:15.040791 containerd[1546]: time="2025-11-23T22:55:15.040754075Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 23 22:55:15.623969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601529646.mount: Deactivated successfully. 
Nov 23 22:55:16.307192 containerd[1546]: time="2025-11-23T22:55:16.307094813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:16.309023 containerd[1546]: time="2025-11-23T22:55:16.308985846Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951714" Nov 23 22:55:16.310754 containerd[1546]: time="2025-11-23T22:55:16.310546259Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:16.313373 containerd[1546]: time="2025-11-23T22:55:16.313302070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:16.315715 containerd[1546]: time="2025-11-23T22:55:16.315256822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.274364986s" Nov 23 22:55:16.315715 containerd[1546]: time="2025-11-23T22:55:16.315306199Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 23 22:55:16.316343 containerd[1546]: time="2025-11-23T22:55:16.316299534Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 23 22:55:16.855251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956292714.mount: Deactivated successfully. 
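Every "Pulled image" entry also records a repo digest of the form name@sha256:<64 hex characters>, which is the content-addressed reference suitable for pinning. A small validation and splitting sketch (Python; the check is just the generic sha256-hex shape, nothing registry-specific):

import re

DIGEST_REF_RE = re.compile(r"^(?P<name>[^@]+)@sha256:(?P<hex>[0-9a-f]{64})$")

def split_digest_ref(ref: str):
    """Split 'name@sha256:<hex>' into (name, digest) or raise if malformed."""
    m = DIGEST_REF_RE.match(ref)
    if not m:
        raise ValueError(f"not a sha256 digest reference: {ref}")
    return m.group("name"), "sha256:" + m.group("hex")

name, digest = split_digest_ref(
    "registry.k8s.io/coredns/coredns@sha256:"
    "9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
)
print(name, digest)
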
Nov 23 22:55:16.862959 containerd[1546]: time="2025-11-23T22:55:16.862869227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:55:16.864107 containerd[1546]: time="2025-11-23T22:55:16.864068421Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268723" Nov 23 22:55:16.865386 containerd[1546]: time="2025-11-23T22:55:16.865332012Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:55:16.867556 containerd[1546]: time="2025-11-23T22:55:16.867506364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 23 22:55:16.868179 containerd[1546]: time="2025-11-23T22:55:16.868137121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 551.698364ms" Nov 23 22:55:16.868179 containerd[1546]: time="2025-11-23T22:55:16.868171797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 23 22:55:16.868847 containerd[1546]: time="2025-11-23T22:55:16.868754415Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 23 22:55:17.434218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount989756242.mount: Deactivated successfully. 
Nov 23 22:55:18.848498 containerd[1546]: time="2025-11-23T22:55:18.848426501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:18.849797 containerd[1546]: time="2025-11-23T22:55:18.849710969Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943239" Nov 23 22:55:18.850654 containerd[1546]: time="2025-11-23T22:55:18.850568893Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:18.853894 containerd[1546]: time="2025-11-23T22:55:18.853816369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:18.855225 containerd[1546]: time="2025-11-23T22:55:18.854967047Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.986173679s" Nov 23 22:55:18.855225 containerd[1546]: time="2025-11-23T22:55:18.855011244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 23 22:55:20.408350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 23 22:55:20.411947 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:20.560864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:20.570074 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 23 22:55:20.621954 kubelet[2295]: E1123 22:55:20.621903 2295 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 23 22:55:20.625126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 23 22:55:20.625409 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 23 22:55:20.625973 systemd[1]: kubelet.service: Consumed 165ms CPU time, 107M memory peak. Nov 23 22:55:23.317463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:23.318098 systemd[1]: kubelet.service: Consumed 165ms CPU time, 107M memory peak. Nov 23 22:55:23.320691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:23.355179 systemd[1]: Reload requested from client PID 2309 ('systemctl') (unit session-7.scope)... Nov 23 22:55:23.355195 systemd[1]: Reloading... Nov 23 22:55:23.498755 zram_generator::config[2365]: No configuration found. Nov 23 22:55:23.676940 systemd[1]: Reloading finished in 321 ms. Nov 23 22:55:23.737972 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 23 22:55:23.738063 systemd[1]: kubelet.service: Failed with result 'signal'. 
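The three "Scheduled restart job" entries for kubelet.service land roughly ten seconds apart (22:54:59, 22:55:10, 22:55:20), consistent with a fixed systemd restart delay rather than any backoff inside the kubelet. A sketch that recovers those gaps from the journal timestamps (Python; the journal prefix omits the year, so one is supplied purely to make parsing work):

from datetime import datetime

# Timestamps of the three "Scheduled restart job" entries above.
STAMPS = ["Nov 23 22:54:59.794008", "Nov 23 22:55:10.158612", "Nov 23 22:55:20.408350"]

def parse(stamp: str, year: int = 2025) -> datetime:
    """Journal prefixes carry no year, so one is added for parsing only."""
    return datetime.strptime(f"{year} {stamp}", "%Y %b %d %H:%M:%S.%f")

times = [parse(s) for s in STAMPS]
for earlier, later in zip(times, times[1:]):
    print(f"{(later - earlier).total_seconds():.1f}s between restart attempts")
# 10.4s between restart attempts
# 10.2s between restart attempts
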
Nov 23 22:55:23.738665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:23.738844 systemd[1]: kubelet.service: Consumed 112ms CPU time, 95M memory peak. Nov 23 22:55:23.741140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:24.005040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:24.018404 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:55:24.068319 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:55:24.068672 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:55:24.068719 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:55:24.068947 kubelet[2401]: I1123 22:55:24.068900 2401 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:55:24.775439 kubelet[2401]: I1123 22:55:24.775356 2401 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 22:55:24.775439 kubelet[2401]: I1123 22:55:24.775400 2401 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:55:24.776028 kubelet[2401]: I1123 22:55:24.775876 2401 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 22:55:24.823368 kubelet[2401]: E1123 22:55:24.822165 2401 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.196.203:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.196.203:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:55:24.828535 kubelet[2401]: I1123 22:55:24.827208 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:55:24.835495 kubelet[2401]: I1123 22:55:24.835420 2401 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:55:24.840920 kubelet[2401]: I1123 22:55:24.840891 2401 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 22:55:24.842253 kubelet[2401]: I1123 22:55:24.842174 2401 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:55:24.842460 kubelet[2401]: I1123 22:55:24.842228 2401 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-2-5-0c65a92823","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:55:24.842601 kubelet[2401]: I1123 22:55:24.842525 2401 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:55:24.842601 kubelet[2401]: I1123 22:55:24.842537 2401 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 22:55:24.842814 kubelet[2401]: I1123 22:55:24.842783 2401 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:24.846529 kubelet[2401]: I1123 22:55:24.846482 2401 kubelet.go:446] "Attempting to sync node with API server" Nov 23 22:55:24.846529 kubelet[2401]: I1123 22:55:24.846518 2401 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:55:24.846529 kubelet[2401]: I1123 22:55:24.846547 2401 kubelet.go:352] "Adding apiserver pod source" Nov 23 22:55:24.847934 kubelet[2401]: I1123 22:55:24.846558 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:55:24.861971 kubelet[2401]: W1123 22:55:24.861888 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.196.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.196.203:6443: connect: connection refused Nov 23 22:55:24.861971 kubelet[2401]: E1123 22:55:24.861973 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.196.203:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.196.203:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:55:24.862257 
kubelet[2401]: W1123 22:55:24.861889 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.196.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-2-5-0c65a92823&limit=500&resourceVersion=0": dial tcp 188.245.196.203:6443: connect: connection refused Nov 23 22:55:24.862257 kubelet[2401]: E1123 22:55:24.862003 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.196.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-2-5-0c65a92823&limit=500&resourceVersion=0\": dial tcp 188.245.196.203:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:55:24.862512 kubelet[2401]: I1123 22:55:24.862462 2401 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:55:24.863358 kubelet[2401]: I1123 22:55:24.863332 2401 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 22:55:24.863510 kubelet[2401]: W1123 22:55:24.863493 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 23 22:55:24.865282 kubelet[2401]: I1123 22:55:24.865251 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 22:55:24.865379 kubelet[2401]: I1123 22:55:24.865299 2401 server.go:1287] "Started kubelet" Nov 23 22:55:24.866752 kubelet[2401]: I1123 22:55:24.865882 2401 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:55:24.867039 kubelet[2401]: I1123 22:55:24.867022 2401 server.go:479] "Adding debug handlers to kubelet server" Nov 23 22:55:24.869981 kubelet[2401]: I1123 22:55:24.869890 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:55:24.870427 kubelet[2401]: I1123 22:55:24.870407 2401 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:55:24.872180 kubelet[2401]: E1123 22:55:24.871380 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.196.203:6443/api/v1/namespaces/default/events\": dial tcp 188.245.196.203:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-2-5-0c65a92823.187ac4be1c38a3d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-2-5-0c65a92823,UID:ci-4459-1-2-5-0c65a92823,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-2-5-0c65a92823,},FirstTimestamp:2025-11-23 22:55:24.865274834 +0000 UTC m=+0.840181037,LastTimestamp:2025-11-23 22:55:24.865274834 +0000 UTC m=+0.840181037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-2-5-0c65a92823,}" Nov 23 22:55:24.875717 kubelet[2401]: I1123 22:55:24.875693 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:55:24.882458 kubelet[2401]: I1123 22:55:24.882409 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:55:24.884343 kubelet[2401]: E1123 22:55:24.884306 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4459-1-2-5-0c65a92823\" not found" Nov 23 22:55:24.884474 kubelet[2401]: I1123 22:55:24.884370 2401 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 22:55:24.884565 kubelet[2401]: I1123 22:55:24.884545 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 22:55:24.884705 kubelet[2401]: I1123 22:55:24.884629 2401 reconciler.go:26] "Reconciler: start to sync state" Nov 23 22:55:24.886299 kubelet[2401]: W1123 22:55:24.886238 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.196.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.196.203:6443: connect: connection refused Nov 23 22:55:24.886389 kubelet[2401]: E1123 22:55:24.886320 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://188.245.196.203:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.196.203:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:55:24.886606 kubelet[2401]: I1123 22:55:24.886577 2401 factory.go:221] Registration of the systemd container factory successfully Nov 23 22:55:24.886683 kubelet[2401]: I1123 22:55:24.886668 2401 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:55:24.887048 kubelet[2401]: E1123 22:55:24.886980 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.196.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-2-5-0c65a92823?timeout=10s\": dial tcp 188.245.196.203:6443: connect: connection refused" interval="200ms" Nov 23 22:55:24.889390 kubelet[2401]: E1123 22:55:24.889346 2401 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:55:24.889948 kubelet[2401]: I1123 22:55:24.889906 2401 factory.go:221] Registration of the containerd container factory successfully Nov 23 22:55:24.908499 kubelet[2401]: I1123 22:55:24.908384 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 22:55:24.910100 kubelet[2401]: I1123 22:55:24.910068 2401 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 22:55:24.910525 kubelet[2401]: I1123 22:55:24.910239 2401 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 22:55:24.910525 kubelet[2401]: I1123 22:55:24.910271 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 22:55:24.910525 kubelet[2401]: I1123 22:55:24.910279 2401 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 22:55:24.910525 kubelet[2401]: E1123 22:55:24.910328 2401 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:55:24.919431 kubelet[2401]: W1123 22:55:24.919361 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.196.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.196.203:6443: connect: connection refused Nov 23 22:55:24.919639 kubelet[2401]: E1123 22:55:24.919594 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.196.203:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.196.203:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:55:24.923658 kubelet[2401]: I1123 22:55:24.923609 2401 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:55:24.923952 kubelet[2401]: I1123 22:55:24.923642 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:55:24.923952 kubelet[2401]: I1123 22:55:24.923835 2401 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:24.931247 kubelet[2401]: I1123 22:55:24.930895 2401 policy_none.go:49] "None policy: Start" Nov 23 22:55:24.931247 kubelet[2401]: I1123 22:55:24.930930 2401 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 22:55:24.931247 kubelet[2401]: I1123 22:55:24.930947 2401 state_mem.go:35] "Initializing new in-memory state store" Nov 23 22:55:24.940587 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 23 22:55:24.960097 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 23 22:55:24.965496 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 23 22:55:24.985834 kubelet[2401]: E1123 22:55:24.984813 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-2-5-0c65a92823\" not found" Nov 23 22:55:24.985834 kubelet[2401]: I1123 22:55:24.984839 2401 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 22:55:24.985834 kubelet[2401]: I1123 22:55:24.985222 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:55:24.985834 kubelet[2401]: I1123 22:55:24.985242 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:55:24.987976 kubelet[2401]: I1123 22:55:24.987955 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:55:24.989504 kubelet[2401]: E1123 22:55:24.989470 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 22:55:24.989893 kubelet[2401]: E1123 22:55:24.989874 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-1-2-5-0c65a92823\" not found" Nov 23 22:55:25.025884 systemd[1]: Created slice kubepods-burstable-pod1d5156d6dac89e68660ee4679c4d3dfe.slice - libcontainer container kubepods-burstable-pod1d5156d6dac89e68660ee4679c4d3dfe.slice. Nov 23 22:55:25.034101 kubelet[2401]: E1123 22:55:25.033971 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.038237 systemd[1]: Created slice kubepods-burstable-podce8088da1dc90909c4ee3945fc381862.slice - libcontainer container kubepods-burstable-podce8088da1dc90909c4ee3945fc381862.slice. Nov 23 22:55:25.052592 kubelet[2401]: E1123 22:55:25.052505 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.056465 systemd[1]: Created slice kubepods-burstable-podaf84ed939b4cc7bccc296f503cbae04b.slice - libcontainer container kubepods-burstable-podaf84ed939b4cc7bccc296f503cbae04b.slice. Nov 23 22:55:25.059042 kubelet[2401]: E1123 22:55:25.059013 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.088500 kubelet[2401]: E1123 22:55:25.088407 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.196.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-2-5-0c65a92823?timeout=10s\": dial tcp 188.245.196.203:6443: connect: connection refused" interval="400ms" Nov 23 22:55:25.090255 kubelet[2401]: I1123 22:55:25.090164 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.091061 kubelet[2401]: E1123 22:55:25.090981 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.196.203:6443/api/v1/nodes\": dial tcp 188.245.196.203:6443: connect: connection refused" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.185929 kubelet[2401]: I1123 22:55:25.185623 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-ca-certs\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.185929 kubelet[2401]: I1123 22:55:25.185677 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.185929 kubelet[2401]: I1123 22:55:25.185705 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.185929 kubelet[2401]: I1123 22:55:25.185756 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af84ed939b4cc7bccc296f503cbae04b-kubeconfig\") pod \"kube-scheduler-ci-4459-1-2-5-0c65a92823\" (UID: \"af84ed939b4cc7bccc296f503cbae04b\") " pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.185929 kubelet[2401]: I1123 22:55:25.185788 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5156d6dac89e68660ee4679c4d3dfe-k8s-certs\") pod \"kube-apiserver-ci-4459-1-2-5-0c65a92823\" (UID: \"1d5156d6dac89e68660ee4679c4d3dfe\") " pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.186481 kubelet[2401]: I1123 22:55:25.185812 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5156d6dac89e68660ee4679c4d3dfe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-2-5-0c65a92823\" (UID: \"1d5156d6dac89e68660ee4679c4d3dfe\") " pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.186481 kubelet[2401]: I1123 22:55:25.185832 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.186481 kubelet[2401]: I1123 22:55:25.185852 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.186481 kubelet[2401]: I1123 22:55:25.185874 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5156d6dac89e68660ee4679c4d3dfe-ca-certs\") pod \"kube-apiserver-ci-4459-1-2-5-0c65a92823\" (UID: \"1d5156d6dac89e68660ee4679c4d3dfe\") " pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.294816 kubelet[2401]: I1123 22:55:25.294564 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.296055 kubelet[2401]: E1123 22:55:25.296013 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.196.203:6443/api/v1/nodes\": dial tcp 188.245.196.203:6443: connect: connection refused" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.337073 containerd[1546]: time="2025-11-23T22:55:25.336966555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-2-5-0c65a92823,Uid:1d5156d6dac89e68660ee4679c4d3dfe,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:25.353979 containerd[1546]: time="2025-11-23T22:55:25.353925229Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-2-5-0c65a92823,Uid:ce8088da1dc90909c4ee3945fc381862,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:25.363874 containerd[1546]: time="2025-11-23T22:55:25.362622143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-2-5-0c65a92823,Uid:af84ed939b4cc7bccc296f503cbae04b,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:25.372676 containerd[1546]: time="2025-11-23T22:55:25.371956253Z" level=info msg="connecting to shim 8ce92f5930a81fd39f1f237210836bf07e0c17629f16c849c5893c33cd68f47d" address="unix:///run/containerd/s/9f9683589a362250af3a30e38680ca0d0760143b6cda229a36bee939d1932e2a" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:25.411244 containerd[1546]: time="2025-11-23T22:55:25.411122514Z" level=info msg="connecting to shim c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3" address="unix:///run/containerd/s/88591a1bc5b5a0e2dfe546f4fed0529ed42990add48627894fd508f58c638816" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:25.422522 systemd[1]: Started cri-containerd-8ce92f5930a81fd39f1f237210836bf07e0c17629f16c849c5893c33cd68f47d.scope - libcontainer container 8ce92f5930a81fd39f1f237210836bf07e0c17629f16c849c5893c33cd68f47d. Nov 23 22:55:25.431099 containerd[1546]: time="2025-11-23T22:55:25.431047734Z" level=info msg="connecting to shim 6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a" address="unix:///run/containerd/s/008376b6c1335a935f3ce29549add5d72af74886aa1e435b7ea88a47ed4dc656" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:25.453953 systemd[1]: Started cri-containerd-c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3.scope - libcontainer container c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3. Nov 23 22:55:25.471977 systemd[1]: Started cri-containerd-6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a.scope - libcontainer container 6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a. 
Nov 23 22:55:25.489441 kubelet[2401]: E1123 22:55:25.489393 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.196.203:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-2-5-0c65a92823?timeout=10s\": dial tcp 188.245.196.203:6443: connect: connection refused" interval="800ms" Nov 23 22:55:25.533517 containerd[1546]: time="2025-11-23T22:55:25.533396112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-2-5-0c65a92823,Uid:1d5156d6dac89e68660ee4679c4d3dfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ce92f5930a81fd39f1f237210836bf07e0c17629f16c849c5893c33cd68f47d\"" Nov 23 22:55:25.539436 containerd[1546]: time="2025-11-23T22:55:25.539271585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-2-5-0c65a92823,Uid:ce8088da1dc90909c4ee3945fc381862,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3\"" Nov 23 22:55:25.541401 containerd[1546]: time="2025-11-23T22:55:25.541349110Z" level=info msg="CreateContainer within sandbox \"8ce92f5930a81fd39f1f237210836bf07e0c17629f16c849c5893c33cd68f47d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 23 22:55:25.544764 containerd[1546]: time="2025-11-23T22:55:25.544415937Z" level=info msg="CreateContainer within sandbox \"c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 23 22:55:25.548798 containerd[1546]: time="2025-11-23T22:55:25.548659794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-2-5-0c65a92823,Uid:af84ed939b4cc7bccc296f503cbae04b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a\"" Nov 23 22:55:25.553417 containerd[1546]: time="2025-11-23T22:55:25.553363355Z" level=info msg="CreateContainer within sandbox \"6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 23 22:55:25.560069 containerd[1546]: time="2025-11-23T22:55:25.560025288Z" level=info msg="Container 2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:25.563164 containerd[1546]: time="2025-11-23T22:55:25.563090235Z" level=info msg="Container ba1175ffb2844bb265b19cc1080cf8b95e87df73be38ddd4d95a444586221310: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:25.565659 containerd[1546]: time="2025-11-23T22:55:25.565207186Z" level=info msg="Container 501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:25.570495 containerd[1546]: time="2025-11-23T22:55:25.570450660Z" level=info msg="CreateContainer within sandbox \"c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df\"" Nov 23 22:55:25.571451 containerd[1546]: time="2025-11-23T22:55:25.571380345Z" level=info msg="StartContainer for \"2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df\"" Nov 23 22:55:25.573810 containerd[1546]: time="2025-11-23T22:55:25.573758355Z" level=info msg="CreateContainer within sandbox \"8ce92f5930a81fd39f1f237210836bf07e0c17629f16c849c5893c33cd68f47d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ba1175ffb2844bb265b19cc1080cf8b95e87df73be38ddd4d95a444586221310\"" Nov 23 22:55:25.574137 containerd[1546]: time="2025-11-23T22:55:25.574096906Z" level=info msg="connecting to shim 2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df" address="unix:///run/containerd/s/88591a1bc5b5a0e2dfe546f4fed0529ed42990add48627894fd508f58c638816" protocol=ttrpc version=3 Nov 23 22:55:25.575659 containerd[1546]: time="2025-11-23T22:55:25.575617844Z" level=info msg="StartContainer for \"ba1175ffb2844bb265b19cc1080cf8b95e87df73be38ddd4d95a444586221310\"" Nov 23 22:55:25.579130 containerd[1546]: time="2025-11-23T22:55:25.579091476Z" level=info msg="CreateContainer within sandbox \"6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658\"" Nov 23 22:55:25.580367 containerd[1546]: time="2025-11-23T22:55:25.580336280Z" level=info msg="StartContainer for \"501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658\"" Nov 23 22:55:25.581137 containerd[1546]: time="2025-11-23T22:55:25.581033013Z" level=info msg="connecting to shim ba1175ffb2844bb265b19cc1080cf8b95e87df73be38ddd4d95a444586221310" address="unix:///run/containerd/s/9f9683589a362250af3a30e38680ca0d0760143b6cda229a36bee939d1932e2a" protocol=ttrpc version=3 Nov 23 22:55:25.582062 containerd[1546]: time="2025-11-23T22:55:25.582027393Z" level=info msg="connecting to shim 501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658" address="unix:///run/containerd/s/008376b6c1335a935f3ce29549add5d72af74886aa1e435b7ea88a47ed4dc656" protocol=ttrpc version=3 Nov 23 22:55:25.607405 systemd[1]: Started cri-containerd-2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df.scope - libcontainer container 2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df. Nov 23 22:55:25.615966 systemd[1]: Started cri-containerd-501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658.scope - libcontainer container 501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658. Nov 23 22:55:25.627188 systemd[1]: Started cri-containerd-ba1175ffb2844bb265b19cc1080cf8b95e87df73be38ddd4d95a444586221310.scope - libcontainer container ba1175ffb2844bb265b19cc1080cf8b95e87df73be38ddd4d95a444586221310. 
Nov 23 22:55:25.685981 containerd[1546]: time="2025-11-23T22:55:25.685889192Z" level=info msg="StartContainer for \"2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df\" returns successfully" Nov 23 22:55:25.700565 kubelet[2401]: I1123 22:55:25.700088 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.700871 kubelet[2401]: E1123 22:55:25.700821 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.196.203:6443/api/v1/nodes\": dial tcp 188.245.196.203:6443: connect: connection refused" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.716190 containerd[1546]: time="2025-11-23T22:55:25.715972447Z" level=info msg="StartContainer for \"501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658\" returns successfully" Nov 23 22:55:25.721712 containerd[1546]: time="2025-11-23T22:55:25.721077294Z" level=info msg="StartContainer for \"ba1175ffb2844bb265b19cc1080cf8b95e87df73be38ddd4d95a444586221310\" returns successfully" Nov 23 22:55:25.836690 kubelet[2401]: W1123 22:55:25.836497 2401 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.196.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-2-5-0c65a92823&limit=500&resourceVersion=0": dial tcp 188.245.196.203:6443: connect: connection refused Nov 23 22:55:25.836690 kubelet[2401]: E1123 22:55:25.836574 2401 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.196.203:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-2-5-0c65a92823&limit=500&resourceVersion=0\": dial tcp 188.245.196.203:6443: connect: connection refused" logger="UnhandledError" Nov 23 22:55:25.933053 kubelet[2401]: E1123 22:55:25.931236 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.934870 kubelet[2401]: E1123 22:55:25.934838 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:25.937862 kubelet[2401]: E1123 22:55:25.937822 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:26.503355 kubelet[2401]: I1123 22:55:26.503305 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:26.941507 kubelet[2401]: E1123 22:55:26.941304 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:26.943772 kubelet[2401]: E1123 22:55:26.943661 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:27.945229 kubelet[2401]: E1123 22:55:27.945067 2401 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.464788 kubelet[2401]: E1123 22:55:28.464739 2401 nodelease.go:49] "Failed to get node when trying to 
set owner ref to the node lease" err="nodes \"ci-4459-1-2-5-0c65a92823\" not found" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.582684 kubelet[2401]: I1123 22:55:28.582617 2401 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.586749 kubelet[2401]: I1123 22:55:28.586670 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.652941 kubelet[2401]: E1123 22:55:28.652882 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-2-5-0c65a92823\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.652941 kubelet[2401]: I1123 22:55:28.652924 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.656265 kubelet[2401]: E1123 22:55:28.656211 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.656412 kubelet[2401]: I1123 22:55:28.656275 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.658490 kubelet[2401]: E1123 22:55:28.658430 2401 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-2-5-0c65a92823\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:28.855911 kubelet[2401]: I1123 22:55:28.854720 2401 apiserver.go:52] "Watching apiserver" Nov 23 22:55:28.885592 kubelet[2401]: I1123 22:55:28.885537 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 22:55:29.915264 kubelet[2401]: I1123 22:55:29.914835 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:30.966966 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-7.scope)... Nov 23 22:55:30.966984 systemd[1]: Reloading... Nov 23 22:55:30.982910 kubelet[2401]: I1123 22:55:30.982881 2401 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.063224 zram_generator::config[2718]: No configuration found. Nov 23 22:55:31.262297 systemd[1]: Reloading finished in 294 ms. Nov 23 22:55:31.300666 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:31.301485 kubelet[2401]: I1123 22:55:31.301432 2401 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:55:31.321135 systemd[1]: kubelet.service: Deactivated successfully. Nov 23 22:55:31.322910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 23 22:55:31.323072 systemd[1]: kubelet.service: Consumed 1.362s CPU time, 128.4M memory peak. Nov 23 22:55:31.326568 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 23 22:55:31.494865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
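Just below, the restarted kubelet prints the same deprecation warnings as its earlier start: --container-runtime-endpoint and --volume-plugin-dir should be set via the config file rather than flags. A hedged sketch of such a fragment (Python emitting JSON, which is also valid YAML); the runtime endpoint value is a placeholder not recorded in this log, while the volume plugin directory is the Flexvolume path the kubelet itself reported recreating earlier:

import json

# Minimal KubeletConfiguration fragment covering the two deprecated flags
# warned about in the entries that follow.
config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",  # placeholder value
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}

# JSON output keeps this sketch dependency-free; a YAML parser accepts it as-is.
print(json.dumps(config, indent=2))
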
Nov 23 22:55:31.507566 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 23 22:55:31.571931 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:55:31.571931 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 23 22:55:31.571931 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 23 22:55:31.573026 kubelet[2763]: I1123 22:55:31.572910 2763 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 23 22:55:31.583307 kubelet[2763]: I1123 22:55:31.583193 2763 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 23 22:55:31.583307 kubelet[2763]: I1123 22:55:31.583230 2763 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 23 22:55:31.583649 kubelet[2763]: I1123 22:55:31.583535 2763 server.go:954] "Client rotation is on, will bootstrap in background" Nov 23 22:55:31.585493 kubelet[2763]: I1123 22:55:31.585442 2763 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 23 22:55:31.588281 kubelet[2763]: I1123 22:55:31.588242 2763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 23 22:55:31.596996 kubelet[2763]: I1123 22:55:31.596967 2763 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 23 22:55:31.600479 kubelet[2763]: I1123 22:55:31.600443 2763 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 23 22:55:31.600912 kubelet[2763]: I1123 22:55:31.600868 2763 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 23 22:55:31.601542 kubelet[2763]: I1123 22:55:31.600915 2763 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-2-5-0c65a92823","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 23 22:55:31.601898 kubelet[2763]: I1123 22:55:31.601877 2763 topology_manager.go:138] "Creating topology manager with none policy" Nov 23 22:55:31.602052 kubelet[2763]: I1123 22:55:31.601988 2763 container_manager_linux.go:304] "Creating device plugin manager" Nov 23 22:55:31.602111 kubelet[2763]: I1123 22:55:31.602102 2763 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:31.602340 kubelet[2763]: I1123 22:55:31.602327 2763 kubelet.go:446] "Attempting to sync node with API server" Nov 23 22:55:31.602434 kubelet[2763]: I1123 22:55:31.602423 2763 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 23 22:55:31.602565 kubelet[2763]: I1123 22:55:31.602508 2763 kubelet.go:352] "Adding apiserver pod source" Nov 23 22:55:31.602565 kubelet[2763]: I1123 22:55:31.602525 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 23 22:55:31.608347 kubelet[2763]: I1123 22:55:31.608100 2763 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 23 22:55:31.610483 kubelet[2763]: I1123 22:55:31.610378 2763 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 23 22:55:31.615643 kubelet[2763]: I1123 22:55:31.615619 2763 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 23 22:55:31.615765 kubelet[2763]: I1123 22:55:31.615662 2763 server.go:1287] "Started kubelet" Nov 23 22:55:31.618730 kubelet[2763]: I1123 22:55:31.618356 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 23 22:55:31.624237 kubelet[2763]: I1123 22:55:31.623893 2763 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Nov 23 22:55:31.625516 kubelet[2763]: I1123 22:55:31.625161 2763 server.go:479] "Adding debug handlers to kubelet server" Nov 23 22:55:31.626634 kubelet[2763]: I1123 22:55:31.626106 2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 23 22:55:31.626634 kubelet[2763]: I1123 22:55:31.626308 2763 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 23 22:55:31.626634 kubelet[2763]: I1123 22:55:31.626555 2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 23 22:55:31.631869 kubelet[2763]: I1123 22:55:31.631551 2763 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 23 22:55:31.632808 kubelet[2763]: I1123 22:55:31.632415 2763 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 23 22:55:31.635761 kubelet[2763]: I1123 22:55:31.635478 2763 reconciler.go:26] "Reconciler: start to sync state" Nov 23 22:55:31.636644 kubelet[2763]: E1123 22:55:31.636208 2763 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 23 22:55:31.637756 kubelet[2763]: I1123 22:55:31.636956 2763 factory.go:221] Registration of the systemd container factory successfully Nov 23 22:55:31.638050 kubelet[2763]: I1123 22:55:31.638018 2763 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 23 22:55:31.639759 kubelet[2763]: I1123 22:55:31.638399 2763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 23 22:55:31.640267 kubelet[2763]: I1123 22:55:31.640217 2763 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 23 22:55:31.640418 kubelet[2763]: I1123 22:55:31.640248 2763 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 23 22:55:31.640460 kubelet[2763]: I1123 22:55:31.640430 2763 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 23 22:55:31.640460 kubelet[2763]: I1123 22:55:31.640440 2763 kubelet.go:2382] "Starting kubelet main sync loop" Nov 23 22:55:31.640517 kubelet[2763]: E1123 22:55:31.640498 2763 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 23 22:55:31.658026 kubelet[2763]: I1123 22:55:31.657605 2763 factory.go:221] Registration of the containerd container factory successfully Nov 23 22:55:31.732636 kubelet[2763]: I1123 22:55:31.732606 2763 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 23 22:55:31.732930 kubelet[2763]: I1123 22:55:31.732852 2763 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 23 22:55:31.732930 kubelet[2763]: I1123 22:55:31.732902 2763 state_mem.go:36] "Initialized new in-memory state store" Nov 23 22:55:31.733706 kubelet[2763]: I1123 22:55:31.733357 2763 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 23 22:55:31.733872 kubelet[2763]: I1123 22:55:31.733835 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 23 22:55:31.733987 kubelet[2763]: I1123 22:55:31.733921 2763 policy_none.go:49] "None policy: Start" Nov 23 22:55:31.733987 kubelet[2763]: I1123 22:55:31.733937 2763 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 23 22:55:31.733987 kubelet[2763]: I1123 22:55:31.733955 2763 state_mem.go:35] "Initializing new in-memory state store" Nov 23 22:55:31.734375 kubelet[2763]: I1123 22:55:31.734252 2763 state_mem.go:75] "Updated machine memory state" Nov 23 22:55:31.741026 kubelet[2763]: E1123 22:55:31.740984 2763 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 23 22:55:31.742511 kubelet[2763]: I1123 22:55:31.742484 2763 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 23 22:55:31.744484 kubelet[2763]: I1123 22:55:31.743291 2763 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 23 22:55:31.744484 kubelet[2763]: I1123 22:55:31.743309 2763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 23 22:55:31.745931 kubelet[2763]: I1123 22:55:31.745907 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 23 22:55:31.749642 kubelet[2763]: E1123 22:55:31.749599 2763 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 23 22:55:31.858268 kubelet[2763]: I1123 22:55:31.857796 2763 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.874190 kubelet[2763]: I1123 22:55:31.873815 2763 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.874190 kubelet[2763]: I1123 22:55:31.873911 2763 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.941613 kubelet[2763]: I1123 22:55:31.941536 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.943495 kubelet[2763]: I1123 22:55:31.942917 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.943495 kubelet[2763]: I1123 22:55:31.943244 2763 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.955614 kubelet[2763]: E1123 22:55:31.955549 2763 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-2-5-0c65a92823\" already exists" pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:31.955977 kubelet[2763]: E1123 22:55:31.955949 2763 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" already exists" pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038588 kubelet[2763]: I1123 22:55:32.038536 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d5156d6dac89e68660ee4679c4d3dfe-ca-certs\") pod \"kube-apiserver-ci-4459-1-2-5-0c65a92823\" (UID: \"1d5156d6dac89e68660ee4679c4d3dfe\") " pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038745 kubelet[2763]: I1123 22:55:32.038596 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038745 kubelet[2763]: I1123 22:55:32.038631 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d5156d6dac89e68660ee4679c4d3dfe-k8s-certs\") pod \"kube-apiserver-ci-4459-1-2-5-0c65a92823\" (UID: \"1d5156d6dac89e68660ee4679c4d3dfe\") " pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038745 kubelet[2763]: I1123 22:55:32.038658 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d5156d6dac89e68660ee4679c4d3dfe-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-2-5-0c65a92823\" (UID: \"1d5156d6dac89e68660ee4679c4d3dfe\") " pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038745 kubelet[2763]: I1123 22:55:32.038686 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-ca-certs\") pod 
\"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038745 kubelet[2763]: I1123 22:55:32.038708 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038910 kubelet[2763]: I1123 22:55:32.038756 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038910 kubelet[2763]: I1123 22:55:32.038783 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce8088da1dc90909c4ee3945fc381862-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-2-5-0c65a92823\" (UID: \"ce8088da1dc90909c4ee3945fc381862\") " pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.038910 kubelet[2763]: I1123 22:55:32.038819 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/af84ed939b4cc7bccc296f503cbae04b-kubeconfig\") pod \"kube-scheduler-ci-4459-1-2-5-0c65a92823\" (UID: \"af84ed939b4cc7bccc296f503cbae04b\") " pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" Nov 23 22:55:32.616068 kubelet[2763]: I1123 22:55:32.615432 2763 apiserver.go:52] "Watching apiserver" Nov 23 22:55:32.634421 kubelet[2763]: I1123 22:55:32.634365 2763 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 23 22:55:32.718072 kubelet[2763]: I1123 22:55:32.717911 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-1-2-5-0c65a92823" podStartSLOduration=3.717893417 podStartE2EDuration="3.717893417s" podCreationTimestamp="2025-11-23 22:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:55:32.717849218 +0000 UTC m=+1.203967529" watchObservedRunningTime="2025-11-23 22:55:32.717893417 +0000 UTC m=+1.204011728" Nov 23 22:55:32.742265 kubelet[2763]: I1123 22:55:32.742194 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-1-2-5-0c65a92823" podStartSLOduration=2.741830686 podStartE2EDuration="2.741830686s" podCreationTimestamp="2025-11-23 22:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:55:32.741684489 +0000 UTC m=+1.227802800" watchObservedRunningTime="2025-11-23 22:55:32.741830686 +0000 UTC m=+1.227948997" Nov 23 22:55:32.782624 kubelet[2763]: I1123 22:55:32.781878 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-1-2-5-0c65a92823" podStartSLOduration=1.781863703 podStartE2EDuration="1.781863703s" 
podCreationTimestamp="2025-11-23 22:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:55:32.760163657 +0000 UTC m=+1.246282008" watchObservedRunningTime="2025-11-23 22:55:32.781863703 +0000 UTC m=+1.267982014" Nov 23 22:55:36.309232 kubelet[2763]: I1123 22:55:36.309030 2763 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 23 22:55:36.310508 kubelet[2763]: I1123 22:55:36.310344 2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 23 22:55:36.310576 containerd[1546]: time="2025-11-23T22:55:36.310071376Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 23 22:55:37.308711 systemd[1]: Created slice kubepods-besteffort-podcb0416f2_30e3_4c48_83ef_085fc93d8542.slice - libcontainer container kubepods-besteffort-podcb0416f2_30e3_4c48_83ef_085fc93d8542.slice. Nov 23 22:55:37.373748 kubelet[2763]: I1123 22:55:37.373617 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cb0416f2-30e3-4c48-83ef-085fc93d8542-kube-proxy\") pod \"kube-proxy-tb6xn\" (UID: \"cb0416f2-30e3-4c48-83ef-085fc93d8542\") " pod="kube-system/kube-proxy-tb6xn" Nov 23 22:55:37.373748 kubelet[2763]: I1123 22:55:37.373700 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb0416f2-30e3-4c48-83ef-085fc93d8542-lib-modules\") pod \"kube-proxy-tb6xn\" (UID: \"cb0416f2-30e3-4c48-83ef-085fc93d8542\") " pod="kube-system/kube-proxy-tb6xn" Nov 23 22:55:37.375122 kubelet[2763]: I1123 22:55:37.375012 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7vkj\" (UniqueName: \"kubernetes.io/projected/cb0416f2-30e3-4c48-83ef-085fc93d8542-kube-api-access-h7vkj\") pod \"kube-proxy-tb6xn\" (UID: \"cb0416f2-30e3-4c48-83ef-085fc93d8542\") " pod="kube-system/kube-proxy-tb6xn" Nov 23 22:55:37.375122 kubelet[2763]: I1123 22:55:37.375085 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb0416f2-30e3-4c48-83ef-085fc93d8542-xtables-lock\") pod \"kube-proxy-tb6xn\" (UID: \"cb0416f2-30e3-4c48-83ef-085fc93d8542\") " pod="kube-system/kube-proxy-tb6xn" Nov 23 22:55:37.442503 systemd[1]: Created slice kubepods-besteffort-podbd527f36_7519_4439_9e9c_5324af41751d.slice - libcontainer container kubepods-besteffort-podbd527f36_7519_4439_9e9c_5324af41751d.slice. 
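For the static pods above, pod_startup_latency_tracker reports a podStartSLOduration while both pulling timestamps are the zero time (nothing was pulled), so the value is simply the observed running time minus the pod creation timestamp. A quick check, as an illustration rather than kubelet code, using the kube-scheduler numbers from the log:

```python
from datetime import datetime, timezone

created  = datetime(2025, 11, 23, 22, 55, 29, tzinfo=timezone.utc)           # podCreationTimestamp
observed = datetime(2025, 11, 23, 22, 55, 32, 717893, tzinfo=timezone.utc)   # watchObservedRunningTime (µs precision)

print((observed - created).total_seconds())  # ~3.717893, matching podStartSLOduration=3.717893417s
```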
Nov 23 22:55:37.476116 kubelet[2763]: I1123 22:55:37.476041 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzf7n\" (UniqueName: \"kubernetes.io/projected/bd527f36-7519-4439-9e9c-5324af41751d-kube-api-access-pzf7n\") pod \"tigera-operator-7dcd859c48-dzs5d\" (UID: \"bd527f36-7519-4439-9e9c-5324af41751d\") " pod="tigera-operator/tigera-operator-7dcd859c48-dzs5d" Nov 23 22:55:37.476295 kubelet[2763]: I1123 22:55:37.476159 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd527f36-7519-4439-9e9c-5324af41751d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dzs5d\" (UID: \"bd527f36-7519-4439-9e9c-5324af41751d\") " pod="tigera-operator/tigera-operator-7dcd859c48-dzs5d" Nov 23 22:55:37.618410 containerd[1546]: time="2025-11-23T22:55:37.618266009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb6xn,Uid:cb0416f2-30e3-4c48-83ef-085fc93d8542,Namespace:kube-system,Attempt:0,}" Nov 23 22:55:37.644493 containerd[1546]: time="2025-11-23T22:55:37.644175707Z" level=info msg="connecting to shim 2f2906da7f167b8f9a06aa5b28e205808026ee8c189579e802686d4f0ed7e2bd" address="unix:///run/containerd/s/99aedd761cbcb1fc6cb9d424d56619310bc5fd4509988955c0a9c5ec43ed0e0b" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:37.669061 systemd[1]: Started cri-containerd-2f2906da7f167b8f9a06aa5b28e205808026ee8c189579e802686d4f0ed7e2bd.scope - libcontainer container 2f2906da7f167b8f9a06aa5b28e205808026ee8c189579e802686d4f0ed7e2bd. Nov 23 22:55:37.700829 containerd[1546]: time="2025-11-23T22:55:37.700675493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb6xn,Uid:cb0416f2-30e3-4c48-83ef-085fc93d8542,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f2906da7f167b8f9a06aa5b28e205808026ee8c189579e802686d4f0ed7e2bd\"" Nov 23 22:55:37.705025 containerd[1546]: time="2025-11-23T22:55:37.704971570Z" level=info msg="CreateContainer within sandbox \"2f2906da7f167b8f9a06aa5b28e205808026ee8c189579e802686d4f0ed7e2bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 23 22:55:37.727804 containerd[1546]: time="2025-11-23T22:55:37.727681090Z" level=info msg="Container 88dccd3e05d66fd585092c85be4260b8ba9544225efab983988bd0ffa1e419c9: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:37.738824 containerd[1546]: time="2025-11-23T22:55:37.738611278Z" level=info msg="CreateContainer within sandbox \"2f2906da7f167b8f9a06aa5b28e205808026ee8c189579e802686d4f0ed7e2bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"88dccd3e05d66fd585092c85be4260b8ba9544225efab983988bd0ffa1e419c9\"" Nov 23 22:55:37.740374 containerd[1546]: time="2025-11-23T22:55:37.739590179Z" level=info msg="StartContainer for \"88dccd3e05d66fd585092c85be4260b8ba9544225efab983988bd0ffa1e419c9\"" Nov 23 22:55:37.742571 containerd[1546]: time="2025-11-23T22:55:37.742273927Z" level=info msg="connecting to shim 88dccd3e05d66fd585092c85be4260b8ba9544225efab983988bd0ffa1e419c9" address="unix:///run/containerd/s/99aedd761cbcb1fc6cb9d424d56619310bc5fd4509988955c0a9c5ec43ed0e0b" protocol=ttrpc version=3 Nov 23 22:55:37.746397 containerd[1546]: time="2025-11-23T22:55:37.746336048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dzs5d,Uid:bd527f36-7519-4439-9e9c-5324af41751d,Namespace:tigera-operator,Attempt:0,}" Nov 23 22:55:37.766294 systemd[1]: Started 
cri-containerd-88dccd3e05d66fd585092c85be4260b8ba9544225efab983988bd0ffa1e419c9.scope - libcontainer container 88dccd3e05d66fd585092c85be4260b8ba9544225efab983988bd0ffa1e419c9. Nov 23 22:55:37.775938 containerd[1546]: time="2025-11-23T22:55:37.775704840Z" level=info msg="connecting to shim 818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab" address="unix:///run/containerd/s/7cec225c53bff478a7084912a4436140e05ce4396454f292afd43c8989891d95" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:55:37.805018 systemd[1]: Started cri-containerd-818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab.scope - libcontainer container 818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab. Nov 23 22:55:37.852210 containerd[1546]: time="2025-11-23T22:55:37.851498571Z" level=info msg="StartContainer for \"88dccd3e05d66fd585092c85be4260b8ba9544225efab983988bd0ffa1e419c9\" returns successfully" Nov 23 22:55:37.878356 containerd[1546]: time="2025-11-23T22:55:37.877395590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dzs5d,Uid:bd527f36-7519-4439-9e9c-5324af41751d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab\"" Nov 23 22:55:37.884198 containerd[1546]: time="2025-11-23T22:55:37.884151579Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 23 22:55:38.497953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3097327735.mount: Deactivated successfully. Nov 23 22:55:39.682428 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2460847736.mount: Deactivated successfully. Nov 23 22:55:40.639252 containerd[1546]: time="2025-11-23T22:55:40.639154360Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:40.640885 containerd[1546]: time="2025-11-23T22:55:40.640846092Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 23 22:55:40.641810 containerd[1546]: time="2025-11-23T22:55:40.641765637Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:40.646192 containerd[1546]: time="2025-11-23T22:55:40.646099006Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:55:40.646905 containerd[1546]: time="2025-11-23T22:55:40.646835033Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.76118212s" Nov 23 22:55:40.646905 containerd[1546]: time="2025-11-23T22:55:40.646878873Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 23 22:55:40.651952 containerd[1546]: time="2025-11-23T22:55:40.651887190Z" level=info msg="CreateContainer within sandbox \"818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 23 22:55:40.665458 
containerd[1546]: time="2025-11-23T22:55:40.664955694Z" level=info msg="Container 8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:55:40.670785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3684970375.mount: Deactivated successfully. Nov 23 22:55:40.675401 containerd[1546]: time="2025-11-23T22:55:40.675320723Z" level=info msg="CreateContainer within sandbox \"818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7\"" Nov 23 22:55:40.676789 containerd[1546]: time="2025-11-23T22:55:40.676752899Z" level=info msg="StartContainer for \"8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7\"" Nov 23 22:55:40.678239 containerd[1546]: time="2025-11-23T22:55:40.678167676Z" level=info msg="connecting to shim 8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7" address="unix:///run/containerd/s/7cec225c53bff478a7084912a4436140e05ce4396454f292afd43c8989891d95" protocol=ttrpc version=3 Nov 23 22:55:40.703249 systemd[1]: Started cri-containerd-8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7.scope - libcontainer container 8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7. Nov 23 22:55:40.746399 containerd[1546]: time="2025-11-23T22:55:40.746310670Z" level=info msg="StartContainer for \"8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7\" returns successfully" Nov 23 22:55:40.980283 kubelet[2763]: I1123 22:55:40.980195 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tb6xn" podStartSLOduration=3.980168887 podStartE2EDuration="3.980168887s" podCreationTimestamp="2025-11-23 22:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:55:38.740199305 +0000 UTC m=+7.226317616" watchObservedRunningTime="2025-11-23 22:55:40.980168887 +0000 UTC m=+9.466287238" Nov 23 22:55:41.764132 kubelet[2763]: I1123 22:55:41.763389 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dzs5d" podStartSLOduration=1.997868899 podStartE2EDuration="4.763366624s" podCreationTimestamp="2025-11-23 22:55:37 +0000 UTC" firstStartedPulling="2025-11-23 22:55:37.88307264 +0000 UTC m=+6.369190951" lastFinishedPulling="2025-11-23 22:55:40.648570325 +0000 UTC m=+9.134688676" observedRunningTime="2025-11-23 22:55:41.763078428 +0000 UTC m=+10.249196779" watchObservedRunningTime="2025-11-23 22:55:41.763366624 +0000 UTC m=+10.249484935" Nov 23 22:55:46.960016 sudo[1817]: pam_unix(sudo:session): session closed for user root Nov 23 22:55:47.118265 sshd[1816]: Connection closed by 139.178.89.65 port 60872 Nov 23 22:55:47.117064 sshd-session[1813]: pam_unix(sshd:session): session closed for user core Nov 23 22:55:47.123335 systemd[1]: sshd@6-188.245.196.203:22-139.178.89.65:60872.service: Deactivated successfully. Nov 23 22:55:47.127072 systemd[1]: session-7.scope: Deactivated successfully. Nov 23 22:55:47.128602 systemd[1]: session-7.scope: Consumed 6.200s CPU time, 223.8M memory peak. Nov 23 22:55:47.131758 systemd-logind[1528]: Session 7 logged out. Waiting for processes to exit. Nov 23 22:55:47.133998 systemd-logind[1528]: Removed session 7. 
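The tigera-operator entries above carry both the image pull timing and the startup latency record, and the reported durations are mutually consistent: end-to-end is observedRunningTime minus podCreationTimestamp, the SLO duration here matches end-to-end minus the pull window, and the pull moved about 22 MB in about 2.76 s. A back-of-the-envelope check (illustration only, timestamps truncated to microseconds):

```python
from datetime import datetime, timezone

def ts(second, microsecond):
    # All timestamps in these entries fall on 2025-11-23 22:55 UTC; only the seconds differ.
    return datetime(2025, 11, 23, 22, 55, second, microsecond, tzinfo=timezone.utc)

created       = ts(37, 0)        # podCreationTimestamp
pull_started  = ts(37, 883072)   # firstStartedPulling
pull_finished = ts(40, 648570)   # lastFinishedPulling
observed      = ts(41, 763366)   # watchObservedRunningTime

e2e  = (observed - created).total_seconds()            # ~4.763  -> podStartE2EDuration=4.763366624s
pull = (pull_finished - pull_started).total_seconds()  # ~2.765  -> image pull window
print(e2e, pull, e2e - pull)                           # e2e - pull ~1.998 -> podStartSLOduration=1.997868899s

# Rough pull throughput: "bytes read=22152004" over "2.76118212s" is about 7.7 MiB/s.
print(22_152_004 / 2.76118212 / (1024 * 1024))
```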
Nov 23 22:55:59.492364 systemd[1]: Created slice kubepods-besteffort-pod5fb3b340_9aaa_4e23_bc2a_38b8c14011bf.slice - libcontainer container kubepods-besteffort-pod5fb3b340_9aaa_4e23_bc2a_38b8c14011bf.slice. Nov 23 22:55:59.499321 kubelet[2763]: W1123 22:55:59.499270 2763 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4459-1-2-5-0c65a92823" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object Nov 23 22:55:59.499901 kubelet[2763]: E1123 22:55:59.499847 2763 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4459-1-2-5-0c65a92823\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object" logger="UnhandledError" Nov 23 22:55:59.500934 kubelet[2763]: W1123 22:55:59.500826 2763 reflector.go:569] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459-1-2-5-0c65a92823" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object Nov 23 22:55:59.500934 kubelet[2763]: E1123 22:55:59.500895 2763 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459-1-2-5-0c65a92823\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object" logger="UnhandledError" Nov 23 22:55:59.501424 kubelet[2763]: W1123 22:55:59.501370 2763 reflector.go:569] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4459-1-2-5-0c65a92823" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object Nov 23 22:55:59.501424 kubelet[2763]: E1123 22:55:59.501400 2763 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4459-1-2-5-0c65a92823\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object" logger="UnhandledError" Nov 23 22:55:59.522610 kubelet[2763]: I1123 22:55:59.522539 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fb3b340-9aaa-4e23-bc2a-38b8c14011bf-tigera-ca-bundle\") pod \"calico-typha-6c8975f585-795cx\" (UID: \"5fb3b340-9aaa-4e23-bc2a-38b8c14011bf\") " pod="calico-system/calico-typha-6c8975f585-795cx" Nov 23 22:55:59.524097 kubelet[2763]: I1123 22:55:59.523817 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/5fb3b340-9aaa-4e23-bc2a-38b8c14011bf-typha-certs\") pod \"calico-typha-6c8975f585-795cx\" (UID: \"5fb3b340-9aaa-4e23-bc2a-38b8c14011bf\") " pod="calico-system/calico-typha-6c8975f585-795cx" Nov 23 22:55:59.524097 kubelet[2763]: I1123 22:55:59.524019 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mp6\" (UniqueName: \"kubernetes.io/projected/5fb3b340-9aaa-4e23-bc2a-38b8c14011bf-kube-api-access-76mp6\") pod \"calico-typha-6c8975f585-795cx\" (UID: \"5fb3b340-9aaa-4e23-bc2a-38b8c14011bf\") " pod="calico-system/calico-typha-6c8975f585-795cx" Nov 23 22:55:59.682960 systemd[1]: Created slice kubepods-besteffort-pode79e56ba_7e42_4cbb_af9f_b856a3489e87.slice - libcontainer container kubepods-besteffort-pode79e56ba_7e42_4cbb_af9f_b856a3489e87.slice. Nov 23 22:55:59.725672 kubelet[2763]: I1123 22:55:59.725633 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-xtables-lock\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.725915 kubelet[2763]: I1123 22:55:59.725902 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-cni-net-dir\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.726073 kubelet[2763]: I1123 22:55:59.726034 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2dz\" (UniqueName: \"kubernetes.io/projected/e79e56ba-7e42-4cbb-af9f-b856a3489e87-kube-api-access-5s2dz\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.726190 kubelet[2763]: I1123 22:55:59.726138 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-policysync\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.726328 kubelet[2763]: I1123 22:55:59.726249 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-lib-modules\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.726328 kubelet[2763]: I1123 22:55:59.726271 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e79e56ba-7e42-4cbb-af9f-b856a3489e87-tigera-ca-bundle\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.726511 kubelet[2763]: I1123 22:55:59.726287 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-var-lib-calico\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 
22:55:59.726641 kubelet[2763]: I1123 22:55:59.726598 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e79e56ba-7e42-4cbb-af9f-b856a3489e87-node-certs\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.726712 kubelet[2763]: I1123 22:55:59.726624 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-var-run-calico\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.726853 kubelet[2763]: I1123 22:55:59.726841 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-flexvol-driver-host\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.727042 kubelet[2763]: I1123 22:55:59.726930 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-cni-bin-dir\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.727042 kubelet[2763]: I1123 22:55:59.726950 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e79e56ba-7e42-4cbb-af9f-b856a3489e87-cni-log-dir\") pod \"calico-node-fkqxl\" (UID: \"e79e56ba-7e42-4cbb-af9f-b856a3489e87\") " pod="calico-system/calico-node-fkqxl" Nov 23 22:55:59.831790 kubelet[2763]: E1123 22:55:59.830907 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.831790 kubelet[2763]: W1123 22:55:59.830939 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.831790 kubelet[2763]: E1123 22:55:59.830963 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.836036 kubelet[2763]: E1123 22:55:59.835854 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.836036 kubelet[2763]: W1123 22:55:59.835883 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.836036 kubelet[2763]: E1123 22:55:59.835912 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.837153 kubelet[2763]: E1123 22:55:59.837128 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.837292 kubelet[2763]: W1123 22:55:59.837276 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.837437 kubelet[2763]: E1123 22:55:59.837420 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.839763 kubelet[2763]: E1123 22:55:59.839582 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.839869 kubelet[2763]: W1123 22:55:59.839823 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.839976 kubelet[2763]: E1123 22:55:59.839922 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.840371 kubelet[2763]: E1123 22:55:59.840354 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.840419 kubelet[2763]: W1123 22:55:59.840371 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.840419 kubelet[2763]: E1123 22:55:59.840388 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.874004 kubelet[2763]: E1123 22:55:59.873158 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:55:59.907317 kubelet[2763]: E1123 22:55:59.907265 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.907317 kubelet[2763]: W1123 22:55:59.907291 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.907979 kubelet[2763]: E1123 22:55:59.907333 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.908147 kubelet[2763]: E1123 22:55:59.908122 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.908220 kubelet[2763]: W1123 22:55:59.908140 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.908220 kubelet[2763]: E1123 22:55:59.908191 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.908401 kubelet[2763]: E1123 22:55:59.908377 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.908401 kubelet[2763]: W1123 22:55:59.908392 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.908401 kubelet[2763]: E1123 22:55:59.908403 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.909397 kubelet[2763]: E1123 22:55:59.908550 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.909397 kubelet[2763]: W1123 22:55:59.908558 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.909397 kubelet[2763]: E1123 22:55:59.908566 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.909397 kubelet[2763]: E1123 22:55:59.908695 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.909397 kubelet[2763]: W1123 22:55:59.908702 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.909397 kubelet[2763]: E1123 22:55:59.908713 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.909397 kubelet[2763]: E1123 22:55:59.908855 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.909397 kubelet[2763]: W1123 22:55:59.908862 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.909397 kubelet[2763]: E1123 22:55:59.908871 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.909397 kubelet[2763]: E1123 22:55:59.909011 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.909936 kubelet[2763]: W1123 22:55:59.909019 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.909936 kubelet[2763]: E1123 22:55:59.909028 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.909936 kubelet[2763]: E1123 22:55:59.909163 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.909936 kubelet[2763]: W1123 22:55:59.909172 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.909936 kubelet[2763]: E1123 22:55:59.909181 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.909936 kubelet[2763]: E1123 22:55:59.909651 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.909936 kubelet[2763]: W1123 22:55:59.909664 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.909936 kubelet[2763]: E1123 22:55:59.909679 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.909936 kubelet[2763]: E1123 22:55:59.909868 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.909936 kubelet[2763]: W1123 22:55:59.909877 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911081 kubelet[2763]: E1123 22:55:59.909885 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911081 kubelet[2763]: E1123 22:55:59.910006 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911081 kubelet[2763]: W1123 22:55:59.910013 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911081 kubelet[2763]: E1123 22:55:59.910023 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.911081 kubelet[2763]: E1123 22:55:59.910133 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911081 kubelet[2763]: W1123 22:55:59.910141 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911081 kubelet[2763]: E1123 22:55:59.910148 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911081 kubelet[2763]: E1123 22:55:59.910265 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911081 kubelet[2763]: W1123 22:55:59.910271 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911081 kubelet[2763]: E1123 22:55:59.910278 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911348 kubelet[2763]: E1123 22:55:59.910391 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911348 kubelet[2763]: W1123 22:55:59.910400 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911348 kubelet[2763]: E1123 22:55:59.910407 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911348 kubelet[2763]: E1123 22:55:59.910528 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911348 kubelet[2763]: W1123 22:55:59.910537 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911348 kubelet[2763]: E1123 22:55:59.910546 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911348 kubelet[2763]: E1123 22:55:59.910659 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911348 kubelet[2763]: W1123 22:55:59.910667 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911348 kubelet[2763]: E1123 22:55:59.910675 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.911348 kubelet[2763]: E1123 22:55:59.910852 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911545 kubelet[2763]: W1123 22:55:59.910861 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911545 kubelet[2763]: E1123 22:55:59.910871 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911545 kubelet[2763]: E1123 22:55:59.911021 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911545 kubelet[2763]: W1123 22:55:59.911030 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911545 kubelet[2763]: E1123 22:55:59.911039 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911545 kubelet[2763]: E1123 22:55:59.911190 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911545 kubelet[2763]: W1123 22:55:59.911199 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.911545 kubelet[2763]: E1123 22:55:59.911209 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.911545 kubelet[2763]: E1123 22:55:59.911369 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.911545 kubelet[2763]: W1123 22:55:59.911379 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.912401 kubelet[2763]: E1123 22:55:59.911391 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.929838 kubelet[2763]: E1123 22:55:59.929487 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.931674 kubelet[2763]: W1123 22:55:59.931607 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.931674 kubelet[2763]: E1123 22:55:59.931667 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.931872 kubelet[2763]: I1123 22:55:59.931698 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8226f51c-b67c-40ab-9e53-94d216a79ce7-registration-dir\") pod \"csi-node-driver-zjft2\" (UID: \"8226f51c-b67c-40ab-9e53-94d216a79ce7\") " pod="calico-system/csi-node-driver-zjft2" Nov 23 22:55:59.932327 kubelet[2763]: E1123 22:55:59.932285 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.932327 kubelet[2763]: W1123 22:55:59.932318 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.932417 kubelet[2763]: E1123 22:55:59.932335 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.932417 kubelet[2763]: I1123 22:55:59.932359 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8226f51c-b67c-40ab-9e53-94d216a79ce7-varrun\") pod \"csi-node-driver-zjft2\" (UID: \"8226f51c-b67c-40ab-9e53-94d216a79ce7\") " pod="calico-system/csi-node-driver-zjft2" Nov 23 22:55:59.933189 kubelet[2763]: E1123 22:55:59.933159 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.933267 kubelet[2763]: W1123 22:55:59.933194 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.933267 kubelet[2763]: E1123 22:55:59.933227 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.933368 kubelet[2763]: I1123 22:55:59.933268 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8226f51c-b67c-40ab-9e53-94d216a79ce7-kubelet-dir\") pod \"csi-node-driver-zjft2\" (UID: \"8226f51c-b67c-40ab-9e53-94d216a79ce7\") " pod="calico-system/csi-node-driver-zjft2" Nov 23 22:55:59.934604 kubelet[2763]: E1123 22:55:59.934071 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.934604 kubelet[2763]: W1123 22:55:59.934130 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.934604 kubelet[2763]: E1123 22:55:59.934172 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.934604 kubelet[2763]: I1123 22:55:59.934216 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8226f51c-b67c-40ab-9e53-94d216a79ce7-socket-dir\") pod \"csi-node-driver-zjft2\" (UID: \"8226f51c-b67c-40ab-9e53-94d216a79ce7\") " pod="calico-system/csi-node-driver-zjft2" Nov 23 22:55:59.935282 kubelet[2763]: E1123 22:55:59.935245 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.935282 kubelet[2763]: W1123 22:55:59.935279 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.935706 kubelet[2763]: E1123 22:55:59.935662 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.935912 kubelet[2763]: I1123 22:55:59.935801 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4prqc\" (UniqueName: \"kubernetes.io/projected/8226f51c-b67c-40ab-9e53-94d216a79ce7-kube-api-access-4prqc\") pod \"csi-node-driver-zjft2\" (UID: \"8226f51c-b67c-40ab-9e53-94d216a79ce7\") " pod="calico-system/csi-node-driver-zjft2" Nov 23 22:55:59.936827 kubelet[2763]: E1123 22:55:59.936264 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.936827 kubelet[2763]: W1123 22:55:59.936789 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.936976 kubelet[2763]: E1123 22:55:59.936909 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.938279 kubelet[2763]: E1123 22:55:59.938028 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.938279 kubelet[2763]: W1123 22:55:59.938067 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.938279 kubelet[2763]: E1123 22:55:59.938231 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.938639 kubelet[2763]: E1123 22:55:59.938610 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.938794 kubelet[2763]: W1123 22:55:59.938645 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.938794 kubelet[2763]: E1123 22:55:59.938758 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.939010 kubelet[2763]: E1123 22:55:59.938985 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.939010 kubelet[2763]: W1123 22:55:59.939000 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.939156 kubelet[2763]: E1123 22:55:59.939083 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.939191 kubelet[2763]: E1123 22:55:59.939174 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.939191 kubelet[2763]: W1123 22:55:59.939182 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.939337 kubelet[2763]: E1123 22:55:59.939258 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.939371 kubelet[2763]: E1123 22:55:59.939352 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.939371 kubelet[2763]: W1123 22:55:59.939359 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.939371 kubelet[2763]: E1123 22:55:59.939367 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.939741 kubelet[2763]: E1123 22:55:59.939706 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.939808 kubelet[2763]: W1123 22:55:59.939747 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.939808 kubelet[2763]: E1123 22:55:59.939759 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.940027 kubelet[2763]: E1123 22:55:59.939944 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.940027 kubelet[2763]: W1123 22:55:59.939958 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.940027 kubelet[2763]: E1123 22:55:59.939967 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:55:59.940760 kubelet[2763]: E1123 22:55:59.940204 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.940760 kubelet[2763]: W1123 22:55:59.940222 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.940760 kubelet[2763]: E1123 22:55:59.940234 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:55:59.940760 kubelet[2763]: E1123 22:55:59.940542 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:55:59.940760 kubelet[2763]: W1123 22:55:59.940553 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:55:59.940760 kubelet[2763]: E1123 22:55:59.940566 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.037639 kubelet[2763]: E1123 22:56:00.037369 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.037639 kubelet[2763]: W1123 22:56:00.037416 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.037639 kubelet[2763]: E1123 22:56:00.037445 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.038712 kubelet[2763]: E1123 22:56:00.038533 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.038712 kubelet[2763]: W1123 22:56:00.038557 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.038712 kubelet[2763]: E1123 22:56:00.038578 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.039053 kubelet[2763]: E1123 22:56:00.039038 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.039135 kubelet[2763]: W1123 22:56:00.039119 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.039211 kubelet[2763]: E1123 22:56:00.039197 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:00.039519 kubelet[2763]: E1123 22:56:00.039504 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.039759 kubelet[2763]: W1123 22:56:00.039603 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.039759 kubelet[2763]: E1123 22:56:00.039626 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.040003 kubelet[2763]: E1123 22:56:00.039989 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.040190 kubelet[2763]: W1123 22:56:00.040070 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.040190 kubelet[2763]: E1123 22:56:00.040089 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.040708 kubelet[2763]: E1123 22:56:00.040462 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.040708 kubelet[2763]: W1123 22:56:00.040482 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.040708 kubelet[2763]: E1123 22:56:00.040502 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.040934 kubelet[2763]: E1123 22:56:00.040913 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.040934 kubelet[2763]: W1123 22:56:00.040931 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.041003 kubelet[2763]: E1123 22:56:00.040949 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.041153 kubelet[2763]: E1123 22:56:00.041134 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.041153 kubelet[2763]: W1123 22:56:00.041148 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.041213 kubelet[2763]: E1123 22:56:00.041158 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:00.041571 kubelet[2763]: E1123 22:56:00.041446 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.041571 kubelet[2763]: W1123 22:56:00.041463 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.041571 kubelet[2763]: E1123 22:56:00.041489 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.041770 kubelet[2763]: E1123 22:56:00.041757 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.041925 kubelet[2763]: W1123 22:56:00.041816 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.041925 kubelet[2763]: E1123 22:56:00.041850 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.042088 kubelet[2763]: E1123 22:56:00.042074 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.042147 kubelet[2763]: W1123 22:56:00.042136 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.042222 kubelet[2763]: E1123 22:56:00.042203 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.042441 kubelet[2763]: E1123 22:56:00.042425 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.042513 kubelet[2763]: W1123 22:56:00.042500 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.042594 kubelet[2763]: E1123 22:56:00.042574 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.042830 kubelet[2763]: E1123 22:56:00.042814 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.043047 kubelet[2763]: W1123 22:56:00.042899 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.043047 kubelet[2763]: E1123 22:56:00.042932 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:00.043193 kubelet[2763]: E1123 22:56:00.043178 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.043249 kubelet[2763]: W1123 22:56:00.043236 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.043414 kubelet[2763]: E1123 22:56:00.043384 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.043622 kubelet[2763]: E1123 22:56:00.043605 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.043701 kubelet[2763]: W1123 22:56:00.043687 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.043809 kubelet[2763]: E1123 22:56:00.043784 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.044153 kubelet[2763]: E1123 22:56:00.044002 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.044153 kubelet[2763]: W1123 22:56:00.044056 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.044153 kubelet[2763]: E1123 22:56:00.044082 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.044345 kubelet[2763]: E1123 22:56:00.044328 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.044406 kubelet[2763]: W1123 22:56:00.044394 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.044540 kubelet[2763]: E1123 22:56:00.044514 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.044759 kubelet[2763]: E1123 22:56:00.044732 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.044820 kubelet[2763]: W1123 22:56:00.044808 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.044933 kubelet[2763]: E1123 22:56:00.044908 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:00.045168 kubelet[2763]: E1123 22:56:00.045151 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.045236 kubelet[2763]: W1123 22:56:00.045224 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.045335 kubelet[2763]: E1123 22:56:00.045292 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.045758 kubelet[2763]: E1123 22:56:00.045585 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.045758 kubelet[2763]: W1123 22:56:00.045603 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.045758 kubelet[2763]: E1123 22:56:00.045629 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.045957 kubelet[2763]: E1123 22:56:00.045942 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.046010 kubelet[2763]: W1123 22:56:00.045998 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.046093 kubelet[2763]: E1123 22:56:00.046067 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.046501 kubelet[2763]: E1123 22:56:00.046361 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.046501 kubelet[2763]: W1123 22:56:00.046378 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.046501 kubelet[2763]: E1123 22:56:00.046402 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.046682 kubelet[2763]: E1123 22:56:00.046667 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.046771 kubelet[2763]: W1123 22:56:00.046758 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.046837 kubelet[2763]: E1123 22:56:00.046825 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:00.047714 kubelet[2763]: E1123 22:56:00.047425 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.047714 kubelet[2763]: W1123 22:56:00.047447 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.047714 kubelet[2763]: E1123 22:56:00.047571 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.049457 kubelet[2763]: E1123 22:56:00.049426 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.049457 kubelet[2763]: W1123 22:56:00.049449 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.049705 kubelet[2763]: E1123 22:56:00.049506 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.496271 kubelet[2763]: E1123 22:56:00.496205 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.496509 kubelet[2763]: W1123 22:56:00.496428 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.496509 kubelet[2763]: E1123 22:56:00.496458 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.500424 kubelet[2763]: E1123 22:56:00.500337 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.500424 kubelet[2763]: W1123 22:56:00.500363 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.500424 kubelet[2763]: E1123 22:56:00.500383 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.519705 kubelet[2763]: E1123 22:56:00.519562 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.519705 kubelet[2763]: W1123 22:56:00.519597 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.519705 kubelet[2763]: E1123 22:56:00.519625 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:00.524637 kubelet[2763]: E1123 22:56:00.524419 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.524637 kubelet[2763]: W1123 22:56:00.524455 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.524637 kubelet[2763]: E1123 22:56:00.524484 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.529519 kubelet[2763]: E1123 22:56:00.529188 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.529519 kubelet[2763]: W1123 22:56:00.529440 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.529519 kubelet[2763]: E1123 22:56:00.529468 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.590706 containerd[1546]: time="2025-11-23T22:56:00.590643765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fkqxl,Uid:e79e56ba-7e42-4cbb-af9f-b856a3489e87,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:00.628004 kubelet[2763]: E1123 22:56:00.627949 2763 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:00.628933 kubelet[2763]: E1123 22:56:00.628876 2763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fb3b340-9aaa-4e23-bc2a-38b8c14011bf-typha-certs podName:5fb3b340-9aaa-4e23-bc2a-38b8c14011bf nodeName:}" failed. No retries permitted until 2025-11-23 22:56:01.128842224 +0000 UTC m=+29.614960535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/5fb3b340-9aaa-4e23-bc2a-38b8c14011bf-typha-certs") pod "calico-typha-6c8975f585-795cx" (UID: "5fb3b340-9aaa-4e23-bc2a-38b8c14011bf") : failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:00.640993 containerd[1546]: time="2025-11-23T22:56:00.640935261Z" level=info msg="connecting to shim 348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02" address="unix:///run/containerd/s/f2b4da2867781b60899f82f4986b8c78784bba985bf254c76dee6648e06c9f35" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:00.644473 kubelet[2763]: E1123 22:56:00.644431 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.644473 kubelet[2763]: W1123 22:56:00.644460 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.644784 kubelet[2763]: E1123 22:56:00.644485 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:00.669039 systemd[1]: Started cri-containerd-348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02.scope - libcontainer container 348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02. Nov 23 22:56:00.701694 containerd[1546]: time="2025-11-23T22:56:00.701625725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fkqxl,Uid:e79e56ba-7e42-4cbb-af9f-b856a3489e87,Namespace:calico-system,Attempt:0,} returns sandbox id \"348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02\"" Nov 23 22:56:00.703716 containerd[1546]: time="2025-11-23T22:56:00.703567112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 23 22:56:00.746988 kubelet[2763]: E1123 22:56:00.746836 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.746988 kubelet[2763]: W1123 22:56:00.746876 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.746988 kubelet[2763]: E1123 22:56:00.746919 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.848657 kubelet[2763]: E1123 22:56:00.848472 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.848657 kubelet[2763]: W1123 22:56:00.848515 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.848657 kubelet[2763]: E1123 22:56:00.848543 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:00.950309 kubelet[2763]: E1123 22:56:00.950193 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:00.950309 kubelet[2763]: W1123 22:56:00.950250 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:00.950309 kubelet[2763]: E1123 22:56:00.950281 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:01.051958 kubelet[2763]: E1123 22:56:01.051600 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:01.051958 kubelet[2763]: W1123 22:56:01.051715 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:01.051958 kubelet[2763]: E1123 22:56:01.051791 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:01.154105 kubelet[2763]: E1123 22:56:01.153897 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:01.154105 kubelet[2763]: W1123 22:56:01.153932 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:01.154105 kubelet[2763]: E1123 22:56:01.153962 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:01.154581 kubelet[2763]: E1123 22:56:01.154413 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:01.154581 kubelet[2763]: W1123 22:56:01.154430 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:01.154581 kubelet[2763]: E1123 22:56:01.154448 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:01.154887 kubelet[2763]: E1123 22:56:01.154751 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:01.154887 kubelet[2763]: W1123 22:56:01.154768 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:01.154887 kubelet[2763]: E1123 22:56:01.154792 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:01.155100 kubelet[2763]: E1123 22:56:01.154967 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:01.155100 kubelet[2763]: W1123 22:56:01.154979 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:01.155100 kubelet[2763]: E1123 22:56:01.154991 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:01.155333 kubelet[2763]: E1123 22:56:01.155214 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:01.155333 kubelet[2763]: W1123 22:56:01.155236 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:01.155333 kubelet[2763]: E1123 22:56:01.155251 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 23 22:56:01.163039 kubelet[2763]: E1123 22:56:01.162997 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 23 22:56:01.163039 kubelet[2763]: W1123 22:56:01.163020 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 23 22:56:01.163039 kubelet[2763]: E1123 22:56:01.163043 2763 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 23 22:56:01.298330 containerd[1546]: time="2025-11-23T22:56:01.298265867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8975f585-795cx,Uid:5fb3b340-9aaa-4e23-bc2a-38b8c14011bf,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:01.332568 containerd[1546]: time="2025-11-23T22:56:01.332355642Z" level=info msg="connecting to shim fd9204835e694259be0d84ae8a8a00edb1d961838de0ee4dd9cb10761a956ab0" address="unix:///run/containerd/s/d1518b2d145243ad923698f021cde97d0e999b16c8b6a81b11a6c4c4547972af" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:01.371292 systemd[1]: Started cri-containerd-fd9204835e694259be0d84ae8a8a00edb1d961838de0ee4dd9cb10761a956ab0.scope - libcontainer container fd9204835e694259be0d84ae8a8a00edb1d961838de0ee4dd9cb10761a956ab0. Nov 23 22:56:01.431389 containerd[1546]: time="2025-11-23T22:56:01.431258307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c8975f585-795cx,Uid:5fb3b340-9aaa-4e23-bc2a-38b8c14011bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd9204835e694259be0d84ae8a8a00edb1d961838de0ee4dd9cb10761a956ab0\"" Nov 23 22:56:01.642823 kubelet[2763]: E1123 22:56:01.642539 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:02.256692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645805458.mount: Deactivated successfully. 
Nov 23 22:56:02.332164 containerd[1546]: time="2025-11-23T22:56:02.332100855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:02.333744 containerd[1546]: time="2025-11-23T22:56:02.333677565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570" Nov 23 22:56:02.335351 containerd[1546]: time="2025-11-23T22:56:02.335264715Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:02.340218 containerd[1546]: time="2025-11-23T22:56:02.340127683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:02.341143 containerd[1546]: time="2025-11-23T22:56:02.340697120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.63684057s" Nov 23 22:56:02.341143 containerd[1546]: time="2025-11-23T22:56:02.340756639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 23 22:56:02.343507 containerd[1546]: time="2025-11-23T22:56:02.343374903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 23 22:56:02.345980 containerd[1546]: time="2025-11-23T22:56:02.345911046Z" level=info msg="CreateContainer within sandbox \"348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 23 22:56:02.357903 containerd[1546]: time="2025-11-23T22:56:02.357842290Z" level=info msg="Container 9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:02.372197 containerd[1546]: time="2025-11-23T22:56:02.372133078Z" level=info msg="CreateContainer within sandbox \"348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315\"" Nov 23 22:56:02.373813 containerd[1546]: time="2025-11-23T22:56:02.373530270Z" level=info msg="StartContainer for \"9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315\"" Nov 23 22:56:02.378303 containerd[1546]: time="2025-11-23T22:56:02.378232279Z" level=info msg="connecting to shim 9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315" address="unix:///run/containerd/s/f2b4da2867781b60899f82f4986b8c78784bba985bf254c76dee6648e06c9f35" protocol=ttrpc version=3 Nov 23 22:56:02.408192 systemd[1]: Started cri-containerd-9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315.scope - libcontainer container 9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315. 
Nov 23 22:56:02.495016 containerd[1546]: time="2025-11-23T22:56:02.494958812Z" level=info msg="StartContainer for \"9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315\" returns successfully" Nov 23 22:56:02.516920 systemd[1]: cri-containerd-9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315.scope: Deactivated successfully. Nov 23 22:56:02.527488 containerd[1546]: time="2025-11-23T22:56:02.527417284Z" level=info msg="received container exit event container_id:\"9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315\" id:\"9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315\" pid:3374 exited_at:{seconds:1763938562 nanos:526860168}" Nov 23 22:56:02.559383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ebf3dd6da805300e90eff894e9e67c6181fcc7fd1110d840fc2ca882923c315-rootfs.mount: Deactivated successfully. Nov 23 22:56:03.643104 kubelet[2763]: E1123 22:56:03.642972 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:04.241264 containerd[1546]: time="2025-11-23T22:56:04.241190723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:04.242821 containerd[1546]: time="2025-11-23T22:56:04.242774193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858" Nov 23 22:56:04.243771 containerd[1546]: time="2025-11-23T22:56:04.243693788Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:04.246939 containerd[1546]: time="2025-11-23T22:56:04.246877409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:04.248129 containerd[1546]: time="2025-11-23T22:56:04.248061522Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.904194262s" Nov 23 22:56:04.248129 containerd[1546]: time="2025-11-23T22:56:04.248116521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 23 22:56:04.250670 containerd[1546]: time="2025-11-23T22:56:04.250397388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 23 22:56:04.269909 containerd[1546]: time="2025-11-23T22:56:04.269514953Z" level=info msg="CreateContainer within sandbox \"fd9204835e694259be0d84ae8a8a00edb1d961838de0ee4dd9cb10761a956ab0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 23 22:56:04.283754 containerd[1546]: time="2025-11-23T22:56:04.281905238Z" level=info msg="Container 22d2dd34056cd38d42ee533ecb3322193ff87c489861b7835ae96694b3203007: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:04.284588 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837771076.mount: Deactivated successfully. Nov 23 22:56:04.295653 containerd[1546]: time="2025-11-23T22:56:04.295602756Z" level=info msg="CreateContainer within sandbox \"fd9204835e694259be0d84ae8a8a00edb1d961838de0ee4dd9cb10761a956ab0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"22d2dd34056cd38d42ee533ecb3322193ff87c489861b7835ae96694b3203007\"" Nov 23 22:56:04.300631 containerd[1546]: time="2025-11-23T22:56:04.300437487Z" level=info msg="StartContainer for \"22d2dd34056cd38d42ee533ecb3322193ff87c489861b7835ae96694b3203007\"" Nov 23 22:56:04.306409 containerd[1546]: time="2025-11-23T22:56:04.306295571Z" level=info msg="connecting to shim 22d2dd34056cd38d42ee533ecb3322193ff87c489861b7835ae96694b3203007" address="unix:///run/containerd/s/d1518b2d145243ad923698f021cde97d0e999b16c8b6a81b11a6c4c4547972af" protocol=ttrpc version=3 Nov 23 22:56:04.331991 systemd[1]: Started cri-containerd-22d2dd34056cd38d42ee533ecb3322193ff87c489861b7835ae96694b3203007.scope - libcontainer container 22d2dd34056cd38d42ee533ecb3322193ff87c489861b7835ae96694b3203007. Nov 23 22:56:04.382212 containerd[1546]: time="2025-11-23T22:56:04.382152795Z" level=info msg="StartContainer for \"22d2dd34056cd38d42ee533ecb3322193ff87c489861b7835ae96694b3203007\" returns successfully" Nov 23 22:56:04.841857 kubelet[2763]: I1123 22:56:04.841489 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c8975f585-795cx" podStartSLOduration=3.024966655 podStartE2EDuration="5.841346474s" podCreationTimestamp="2025-11-23 22:55:59 +0000 UTC" firstStartedPulling="2025-11-23 22:56:01.432856736 +0000 UTC m=+29.918975007" lastFinishedPulling="2025-11-23 22:56:04.249236515 +0000 UTC m=+32.735354826" observedRunningTime="2025-11-23 22:56:04.84029732 +0000 UTC m=+33.326415631" watchObservedRunningTime="2025-11-23 22:56:04.841346474 +0000 UTC m=+33.327464785" Nov 23 22:56:05.641764 kubelet[2763]: E1123 22:56:05.640987 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:07.641753 kubelet[2763]: E1123 22:56:07.640937 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:07.694063 containerd[1546]: time="2025-11-23T22:56:07.693986863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:07.696235 containerd[1546]: time="2025-11-23T22:56:07.696160451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 23 22:56:07.697271 containerd[1546]: time="2025-11-23T22:56:07.697207285Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:07.701090 containerd[1546]: time="2025-11-23T22:56:07.700997904Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:07.702506 containerd[1546]: time="2025-11-23T22:56:07.701458421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.451013314s" Nov 23 22:56:07.702506 containerd[1546]: time="2025-11-23T22:56:07.701496021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 23 22:56:07.707170 containerd[1546]: time="2025-11-23T22:56:07.707101150Z" level=info msg="CreateContainer within sandbox \"348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 23 22:56:07.719764 containerd[1546]: time="2025-11-23T22:56:07.719124284Z" level=info msg="Container 0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:07.735907 containerd[1546]: time="2025-11-23T22:56:07.735862712Z" level=info msg="CreateContainer within sandbox \"348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f\"" Nov 23 22:56:07.737011 containerd[1546]: time="2025-11-23T22:56:07.736893506Z" level=info msg="StartContainer for \"0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f\"" Nov 23 22:56:07.740197 containerd[1546]: time="2025-11-23T22:56:07.740142128Z" level=info msg="connecting to shim 0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f" address="unix:///run/containerd/s/f2b4da2867781b60899f82f4986b8c78784bba985bf254c76dee6648e06c9f35" protocol=ttrpc version=3 Nov 23 22:56:07.771010 systemd[1]: Started cri-containerd-0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f.scope - libcontainer container 0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f. Nov 23 22:56:07.856272 containerd[1546]: time="2025-11-23T22:56:07.856117048Z" level=info msg="StartContainer for \"0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f\" returns successfully" Nov 23 22:56:08.350196 containerd[1546]: time="2025-11-23T22:56:08.350150735Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 23 22:56:08.353449 systemd[1]: cri-containerd-0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f.scope: Deactivated successfully. Nov 23 22:56:08.354176 systemd[1]: cri-containerd-0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f.scope: Consumed 527ms CPU time, 185.9M memory peak, 165.9M written to disk. 
Nov 23 22:56:08.360206 kubelet[2763]: I1123 22:56:08.360139 2763 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 23 22:56:08.361814 containerd[1546]: time="2025-11-23T22:56:08.360932957Z" level=info msg="received container exit event container_id:\"0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f\" id:\"0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f\" pid:3476 exited_at:{seconds:1763938568 nanos:359633404}" Nov 23 22:56:08.401049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cbb45be3a039081467d66d4d94de3c2504f418daa101b935a344034f5bc818f-rootfs.mount: Deactivated successfully. Nov 23 22:56:08.428031 systemd[1]: Created slice kubepods-burstable-pod380e22d8_9465_4d71_9c40_a3eb5517c805.slice - libcontainer container kubepods-burstable-pod380e22d8_9465_4d71_9c40_a3eb5517c805.slice. Nov 23 22:56:08.430945 kubelet[2763]: I1123 22:56:08.430899 2763 status_manager.go:890] "Failed to get status for pod" podUID="380e22d8-9465-4d71-9c40-a3eb5517c805" pod="kube-system/coredns-668d6bf9bc-lsgjk" err="pods \"coredns-668d6bf9bc-lsgjk\" is forbidden: User \"system:node:ci-4459-1-2-5-0c65a92823\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object" Nov 23 22:56:08.439834 kubelet[2763]: W1123 22:56:08.439561 2763 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4459-1-2-5-0c65a92823" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object Nov 23 22:56:08.439834 kubelet[2763]: E1123 22:56:08.439600 2763 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4459-1-2-5-0c65a92823\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object" logger="UnhandledError" Nov 23 22:56:08.439834 kubelet[2763]: W1123 22:56:08.439641 2763 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4459-1-2-5-0c65a92823" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object Nov 23 22:56:08.439834 kubelet[2763]: E1123 22:56:08.439653 2763 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4459-1-2-5-0c65a92823\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object" logger="UnhandledError" Nov 23 22:56:08.439834 kubelet[2763]: W1123 22:56:08.439684 2763 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4459-1-2-5-0c65a92823" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object Nov 23 
22:56:08.440047 kubelet[2763]: E1123 22:56:08.439694 2763 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4459-1-2-5-0c65a92823\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4459-1-2-5-0c65a92823' and this object" logger="UnhandledError" Nov 23 22:56:08.459082 systemd[1]: Created slice kubepods-burstable-pod1b7aa383_a5c2_41b3_8b51_d983e5ce1004.slice - libcontainer container kubepods-burstable-pod1b7aa383_a5c2_41b3_8b51_d983e5ce1004.slice. Nov 23 22:56:08.498557 systemd[1]: Created slice kubepods-besteffort-podd80e7c98_e2c6_4469_b5eb_05d06ffc6880.slice - libcontainer container kubepods-besteffort-podd80e7c98_e2c6_4469_b5eb_05d06ffc6880.slice. Nov 23 22:56:08.509196 systemd[1]: Created slice kubepods-besteffort-pod5efde8bf_2f30_47b7_ac7d_0827fb837ab3.slice - libcontainer container kubepods-besteffort-pod5efde8bf_2f30_47b7_ac7d_0827fb837ab3.slice. Nov 23 22:56:08.515870 kubelet[2763]: I1123 22:56:08.514925 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xndw\" (UniqueName: \"kubernetes.io/projected/adda981c-9ce7-4e01-b56b-dc8bfccf049e-kube-api-access-6xndw\") pod \"calico-apiserver-78cb8dfc4-tz5zf\" (UID: \"adda981c-9ce7-4e01-b56b-dc8bfccf049e\") " pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" Nov 23 22:56:08.516075 kubelet[2763]: I1123 22:56:08.516051 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h74n2\" (UniqueName: \"kubernetes.io/projected/1b7aa383-a5c2-41b3-8b51-d983e5ce1004-kube-api-access-h74n2\") pod \"coredns-668d6bf9bc-lvslk\" (UID: \"1b7aa383-a5c2-41b3-8b51-d983e5ce1004\") " pod="kube-system/coredns-668d6bf9bc-lvslk" Nov 23 22:56:08.516108 kubelet[2763]: I1123 22:56:08.516094 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91280d56-7002-4fda-b0e5-b372b6025512-config\") pod \"goldmane-666569f655-ckjtj\" (UID: \"91280d56-7002-4fda-b0e5-b372b6025512\") " pod="calico-system/goldmane-666569f655-ckjtj" Nov 23 22:56:08.516433 kubelet[2763]: I1123 22:56:08.516406 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxn25\" (UniqueName: \"kubernetes.io/projected/380e22d8-9465-4d71-9c40-a3eb5517c805-kube-api-access-xxn25\") pod \"coredns-668d6bf9bc-lsgjk\" (UID: \"380e22d8-9465-4d71-9c40-a3eb5517c805\") " pod="kube-system/coredns-668d6bf9bc-lsgjk" Nov 23 22:56:08.516819 kubelet[2763]: I1123 22:56:08.516448 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-backend-key-pair\") pod \"whisker-674b86bd74-6j679\" (UID: \"64bbc75f-74cd-4202-9b89-037fec03aca5\") " pod="calico-system/whisker-674b86bd74-6j679" Nov 23 22:56:08.517503 kubelet[2763]: I1123 22:56:08.517469 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkmd2\" (UniqueName: \"kubernetes.io/projected/64bbc75f-74cd-4202-9b89-037fec03aca5-kube-api-access-kkmd2\") pod \"whisker-674b86bd74-6j679\" (UID: \"64bbc75f-74cd-4202-9b89-037fec03aca5\") " 
pod="calico-system/whisker-674b86bd74-6j679" Nov 23 22:56:08.517585 kubelet[2763]: I1123 22:56:08.517506 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b7aa383-a5c2-41b3-8b51-d983e5ce1004-config-volume\") pod \"coredns-668d6bf9bc-lvslk\" (UID: \"1b7aa383-a5c2-41b3-8b51-d983e5ce1004\") " pod="kube-system/coredns-668d6bf9bc-lvslk" Nov 23 22:56:08.517585 kubelet[2763]: I1123 22:56:08.517525 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vqf\" (UniqueName: \"kubernetes.io/projected/a5c13c52-4438-4f33-920f-ea52cca520b8-kube-api-access-j6vqf\") pod \"calico-apiserver-64b6d4565b-wllpg\" (UID: \"a5c13c52-4438-4f33-920f-ea52cca520b8\") " pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" Nov 23 22:56:08.517585 kubelet[2763]: I1123 22:56:08.517546 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d80e7c98-e2c6-4469-b5eb-05d06ffc6880-calico-apiserver-certs\") pod \"calico-apiserver-78cb8dfc4-sks64\" (UID: \"d80e7c98-e2c6-4469-b5eb-05d06ffc6880\") " pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" Nov 23 22:56:08.517585 kubelet[2763]: I1123 22:56:08.517565 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/380e22d8-9465-4d71-9c40-a3eb5517c805-config-volume\") pod \"coredns-668d6bf9bc-lsgjk\" (UID: \"380e22d8-9465-4d71-9c40-a3eb5517c805\") " pod="kube-system/coredns-668d6bf9bc-lsgjk" Nov 23 22:56:08.517585 kubelet[2763]: I1123 22:56:08.517583 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6zq\" (UniqueName: \"kubernetes.io/projected/5efde8bf-2f30-47b7-ac7d-0827fb837ab3-kube-api-access-dn6zq\") pod \"calico-kube-controllers-5c79f46457-wvsqw\" (UID: \"5efde8bf-2f30-47b7-ac7d-0827fb837ab3\") " pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" Nov 23 22:56:08.517791 kubelet[2763]: I1123 22:56:08.517602 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/91280d56-7002-4fda-b0e5-b372b6025512-goldmane-ca-bundle\") pod \"goldmane-666569f655-ckjtj\" (UID: \"91280d56-7002-4fda-b0e5-b372b6025512\") " pod="calico-system/goldmane-666569f655-ckjtj" Nov 23 22:56:08.517791 kubelet[2763]: I1123 22:56:08.517619 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5c13c52-4438-4f33-920f-ea52cca520b8-calico-apiserver-certs\") pod \"calico-apiserver-64b6d4565b-wllpg\" (UID: \"a5c13c52-4438-4f33-920f-ea52cca520b8\") " pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" Nov 23 22:56:08.517791 kubelet[2763]: I1123 22:56:08.517636 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qq2j\" (UniqueName: \"kubernetes.io/projected/d80e7c98-e2c6-4469-b5eb-05d06ffc6880-kube-api-access-2qq2j\") pod \"calico-apiserver-78cb8dfc4-sks64\" (UID: \"d80e7c98-e2c6-4469-b5eb-05d06ffc6880\") " pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" Nov 23 22:56:08.517791 kubelet[2763]: I1123 22:56:08.517653 2763 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h69hh\" (UniqueName: \"kubernetes.io/projected/91280d56-7002-4fda-b0e5-b372b6025512-kube-api-access-h69hh\") pod \"goldmane-666569f655-ckjtj\" (UID: \"91280d56-7002-4fda-b0e5-b372b6025512\") " pod="calico-system/goldmane-666569f655-ckjtj" Nov 23 22:56:08.517791 kubelet[2763]: I1123 22:56:08.517668 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-ca-bundle\") pod \"whisker-674b86bd74-6j679\" (UID: \"64bbc75f-74cd-4202-9b89-037fec03aca5\") " pod="calico-system/whisker-674b86bd74-6j679" Nov 23 22:56:08.519812 kubelet[2763]: I1123 22:56:08.517686 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5efde8bf-2f30-47b7-ac7d-0827fb837ab3-tigera-ca-bundle\") pod \"calico-kube-controllers-5c79f46457-wvsqw\" (UID: \"5efde8bf-2f30-47b7-ac7d-0827fb837ab3\") " pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" Nov 23 22:56:08.519812 kubelet[2763]: I1123 22:56:08.517703 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/adda981c-9ce7-4e01-b56b-dc8bfccf049e-calico-apiserver-certs\") pod \"calico-apiserver-78cb8dfc4-tz5zf\" (UID: \"adda981c-9ce7-4e01-b56b-dc8bfccf049e\") " pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" Nov 23 22:56:08.519812 kubelet[2763]: I1123 22:56:08.518870 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/91280d56-7002-4fda-b0e5-b372b6025512-goldmane-key-pair\") pod \"goldmane-666569f655-ckjtj\" (UID: \"91280d56-7002-4fda-b0e5-b372b6025512\") " pod="calico-system/goldmane-666569f655-ckjtj" Nov 23 22:56:08.525633 systemd[1]: Created slice kubepods-besteffort-poda5c13c52_4438_4f33_920f_ea52cca520b8.slice - libcontainer container kubepods-besteffort-poda5c13c52_4438_4f33_920f_ea52cca520b8.slice. Nov 23 22:56:08.536007 systemd[1]: Created slice kubepods-besteffort-podadda981c_9ce7_4e01_b56b_dc8bfccf049e.slice - libcontainer container kubepods-besteffort-podadda981c_9ce7_4e01_b56b_dc8bfccf049e.slice. Nov 23 22:56:08.545307 systemd[1]: Created slice kubepods-besteffort-pod91280d56_7002_4fda_b0e5_b372b6025512.slice - libcontainer container kubepods-besteffort-pod91280d56_7002_4fda_b0e5_b372b6025512.slice. Nov 23 22:56:08.556917 systemd[1]: Created slice kubepods-besteffort-pod64bbc75f_74cd_4202_9b89_037fec03aca5.slice - libcontainer container kubepods-besteffort-pod64bbc75f_74cd_4202_9b89_037fec03aca5.slice. 
Nov 23 22:56:08.819324 containerd[1546]: time="2025-11-23T22:56:08.819147057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c79f46457-wvsqw,Uid:5efde8bf-2f30-47b7-ac7d-0827fb837ab3,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:08.846779 containerd[1546]: time="2025-11-23T22:56:08.846038472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 23 22:56:08.855101 containerd[1546]: time="2025-11-23T22:56:08.854253428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ckjtj,Uid:91280d56-7002-4fda-b0e5-b372b6025512,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:08.866016 containerd[1546]: time="2025-11-23T22:56:08.865967285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674b86bd74-6j679,Uid:64bbc75f-74cd-4202-9b89-037fec03aca5,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:08.969110 containerd[1546]: time="2025-11-23T22:56:08.968827813Z" level=error msg="Failed to destroy network for sandbox \"0ec0f1ddae1ee9eab1ae7df6fc2e8a997b60fd6c314f378447f7bd97156e41bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:08.972922 containerd[1546]: time="2025-11-23T22:56:08.972601873Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c79f46457-wvsqw,Uid:5efde8bf-2f30-47b7-ac7d-0827fb837ab3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec0f1ddae1ee9eab1ae7df6fc2e8a997b60fd6c314f378447f7bd97156e41bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:08.972692 systemd[1]: run-netns-cni\x2d77657a7f\x2dca10\x2d070e\x2d042d\x2dbc184103f6b9.mount: Deactivated successfully. 
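[Editor's note] The sandbox failure above, and every one that follows, reduces to the two preconditions named in the error text itself: a CNI network config must exist under /etc/cni/net.d (the earlier reload failure complained there was none, since the WRITE event it saw was only calico-kubeconfig), and /var/lib/calico/nodename must have been written, which the error says requires the calico/node container to be running with /var/lib/calico/ mounted. A minimal on-node check of those two paths, where the accepted config-file extensions are an assumption rather than something stated in this log:

```python
import os

CNI_CONF_DIR = "/etc/cni/net.d"              # from the "no network config found" error above
NODENAME_FILE = "/var/lib/calico/nodename"   # from the "stat ... no such file" errors

def cni_ready():
    """Report whether the two prerequisites named in the log are satisfied."""
    # Assumption: the CNI config loader picks up .conf/.conflist/.json files;
    # the calico-kubeconfig file written earlier is not a network config.
    entries = os.listdir(CNI_CONF_DIR) if os.path.isdir(CNI_CONF_DIR) else []
    confs = [f for f in entries if f.endswith((".conf", ".conflist", ".json"))]
    print(f"network configs in {CNI_CONF_DIR}: {confs or 'none'}")
    print(f"{NODENAME_FILE} present: {os.path.isfile(NODENAME_FILE)}")
    return bool(confs) and os.path.isfile(NODENAME_FILE)

if __name__ == "__main__":
    print("pod sandboxes can be networked:", cni_ready())
```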
Nov 23 22:56:08.975250 kubelet[2763]: E1123 22:56:08.972938 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec0f1ddae1ee9eab1ae7df6fc2e8a997b60fd6c314f378447f7bd97156e41bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:08.975250 kubelet[2763]: E1123 22:56:08.973015 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec0f1ddae1ee9eab1ae7df6fc2e8a997b60fd6c314f378447f7bd97156e41bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" Nov 23 22:56:08.975250 kubelet[2763]: E1123 22:56:08.973038 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ec0f1ddae1ee9eab1ae7df6fc2e8a997b60fd6c314f378447f7bd97156e41bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" Nov 23 22:56:08.975514 kubelet[2763]: E1123 22:56:08.973090 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c79f46457-wvsqw_calico-system(5efde8bf-2f30-47b7-ac7d-0827fb837ab3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c79f46457-wvsqw_calico-system(5efde8bf-2f30-47b7-ac7d-0827fb837ab3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ec0f1ddae1ee9eab1ae7df6fc2e8a997b60fd6c314f378447f7bd97156e41bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:56:08.991224 containerd[1546]: time="2025-11-23T22:56:08.991115813Z" level=error msg="Failed to destroy network for sandbox \"fe03da6e31525db8fea5839ec1d9ec69181359413db2320660474c6b6d82befe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:08.992632 containerd[1546]: time="2025-11-23T22:56:08.992571126Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ckjtj,Uid:91280d56-7002-4fda-b0e5-b372b6025512,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe03da6e31525db8fea5839ec1d9ec69181359413db2320660474c6b6d82befe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:08.993338 kubelet[2763]: E1123 22:56:08.992819 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe03da6e31525db8fea5839ec1d9ec69181359413db2320660474c6b6d82befe\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:08.993338 kubelet[2763]: E1123 22:56:08.992878 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe03da6e31525db8fea5839ec1d9ec69181359413db2320660474c6b6d82befe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ckjtj" Nov 23 22:56:08.993338 kubelet[2763]: E1123 22:56:08.992898 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe03da6e31525db8fea5839ec1d9ec69181359413db2320660474c6b6d82befe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-ckjtj" Nov 23 22:56:08.993473 kubelet[2763]: E1123 22:56:08.992946 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-ckjtj_calico-system(91280d56-7002-4fda-b0e5-b372b6025512)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-ckjtj_calico-system(91280d56-7002-4fda-b0e5-b372b6025512)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe03da6e31525db8fea5839ec1d9ec69181359413db2320660474c6b6d82befe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:56:09.010969 containerd[1546]: time="2025-11-23T22:56:09.010908628Z" level=error msg="Failed to destroy network for sandbox \"c63ade480788c1f364542d0804dee2e9722c511ca3c75d7c0607fa5a23b6a941\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:09.012964 containerd[1546]: time="2025-11-23T22:56:09.012867458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-674b86bd74-6j679,Uid:64bbc75f-74cd-4202-9b89-037fec03aca5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c63ade480788c1f364542d0804dee2e9722c511ca3c75d7c0607fa5a23b6a941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:09.013351 kubelet[2763]: E1123 22:56:09.013235 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c63ade480788c1f364542d0804dee2e9722c511ca3c75d7c0607fa5a23b6a941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:09.013351 kubelet[2763]: E1123 22:56:09.013316 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"c63ade480788c1f364542d0804dee2e9722c511ca3c75d7c0607fa5a23b6a941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674b86bd74-6j679" Nov 23 22:56:09.013569 kubelet[2763]: E1123 22:56:09.013441 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c63ade480788c1f364542d0804dee2e9722c511ca3c75d7c0607fa5a23b6a941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-674b86bd74-6j679" Nov 23 22:56:09.013755 kubelet[2763]: E1123 22:56:09.013599 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-674b86bd74-6j679_calico-system(64bbc75f-74cd-4202-9b89-037fec03aca5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-674b86bd74-6j679_calico-system(64bbc75f-74cd-4202-9b89-037fec03aca5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c63ade480788c1f364542d0804dee2e9722c511ca3c75d7c0607fa5a23b6a941\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-674b86bd74-6j679" podUID="64bbc75f-74cd-4202-9b89-037fec03aca5" Nov 23 22:56:09.620822 kubelet[2763]: E1123 22:56:09.620745 2763 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:09.621336 kubelet[2763]: E1123 22:56:09.621015 2763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/adda981c-9ce7-4e01-b56b-dc8bfccf049e-calico-apiserver-certs podName:adda981c-9ce7-4e01-b56b-dc8bfccf049e nodeName:}" failed. No retries permitted until 2025-11-23 22:56:10.120990916 +0000 UTC m=+38.607109227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/adda981c-9ce7-4e01-b56b-dc8bfccf049e-calico-apiserver-certs") pod "calico-apiserver-78cb8dfc4-tz5zf" (UID: "adda981c-9ce7-4e01-b56b-dc8bfccf049e") : failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:09.621553 kubelet[2763]: E1123 22:56:09.621533 2763 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:09.621715 kubelet[2763]: E1123 22:56:09.621686 2763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d80e7c98-e2c6-4469-b5eb-05d06ffc6880-calico-apiserver-certs podName:d80e7c98-e2c6-4469-b5eb-05d06ffc6880 nodeName:}" failed. No retries permitted until 2025-11-23 22:56:10.121666593 +0000 UTC m=+38.607784944 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d80e7c98-e2c6-4469-b5eb-05d06ffc6880-calico-apiserver-certs") pod "calico-apiserver-78cb8dfc4-sks64" (UID: "d80e7c98-e2c6-4469-b5eb-05d06ffc6880") : failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:09.624046 kubelet[2763]: E1123 22:56:09.623991 2763 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 23 22:56:09.624145 kubelet[2763]: E1123 22:56:09.624112 2763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/380e22d8-9465-4d71-9c40-a3eb5517c805-config-volume podName:380e22d8-9465-4d71-9c40-a3eb5517c805 nodeName:}" failed. No retries permitted until 2025-11-23 22:56:10.12408506 +0000 UTC m=+38.610203411 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/380e22d8-9465-4d71-9c40-a3eb5517c805-config-volume") pod "coredns-668d6bf9bc-lsgjk" (UID: "380e22d8-9465-4d71-9c40-a3eb5517c805") : failed to sync configmap cache: timed out waiting for the condition Nov 23 22:56:09.625224 kubelet[2763]: E1123 22:56:09.625097 2763 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:09.625224 kubelet[2763]: E1123 22:56:09.625176 2763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5c13c52-4438-4f33-920f-ea52cca520b8-calico-apiserver-certs podName:a5c13c52-4438-4f33-920f-ea52cca520b8 nodeName:}" failed. No retries permitted until 2025-11-23 22:56:10.125158935 +0000 UTC m=+38.611277246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a5c13c52-4438-4f33-920f-ea52cca520b8-calico-apiserver-certs") pod "calico-apiserver-64b6d4565b-wllpg" (UID: "a5c13c52-4438-4f33-920f-ea52cca520b8") : failed to sync secret cache: timed out waiting for the condition Nov 23 22:56:09.630587 kubelet[2763]: E1123 22:56:09.630530 2763 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Nov 23 22:56:09.630908 kubelet[2763]: E1123 22:56:09.630623 2763 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1b7aa383-a5c2-41b3-8b51-d983e5ce1004-config-volume podName:1b7aa383-a5c2-41b3-8b51-d983e5ce1004 nodeName:}" failed. No retries permitted until 2025-11-23 22:56:10.130602906 +0000 UTC m=+38.616721217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1b7aa383-a5c2-41b3-8b51-d983e5ce1004-config-volume") pod "coredns-668d6bf9bc-lvslk" (UID: "1b7aa383-a5c2-41b3-8b51-d983e5ce1004") : failed to sync configmap cache: timed out waiting for the condition Nov 23 22:56:09.654080 systemd[1]: Created slice kubepods-besteffort-pod8226f51c_b67c_40ab_9e53_94d216a79ce7.slice - libcontainer container kubepods-besteffort-pod8226f51c_b67c_40ab_9e53_94d216a79ce7.slice. 
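[Editor's note] Each MountVolume retry above carries both a wall-clock deadline and the kubelet-monotonic offset (the m=+… value), with durationBeforeRetry stated as 500ms. A quick sanity check with the first secret's numbers, taking the E1123 22:56:09.621015 log time as the failure time, shows the deadline sits roughly half a second later, as stated:

```python
from datetime import datetime, timezone

# Values copied from the first nestedpendingoperations entry above
# (calico-apiserver-certs for pod calico-apiserver-78cb8dfc4-tz5zf).
failure_logged = datetime(2025, 11, 23, 22, 56, 9, 621015, tzinfo=timezone.utc)
retry_not_before = datetime.strptime(
    "2025-11-23 22:56:10.120990916"[:26],   # truncate nanoseconds to microseconds
    "%Y-%m-%d %H:%M:%S.%f",
).replace(tzinfo=timezone.utc)

backoff = (retry_not_before - failure_logged).total_seconds()
print(f"deadline - failure log time = {backoff:.3f} s")   # ~0.500 s
```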
Nov 23 22:56:09.657352 containerd[1546]: time="2025-11-23T22:56:09.657269887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjft2,Uid:8226f51c-b67c-40ab-9e53-94d216a79ce7,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:09.710473 containerd[1546]: time="2025-11-23T22:56:09.710405289Z" level=error msg="Failed to destroy network for sandbox \"8f0e527375a0c3d1f4a93992244ddb45a577433df61fe3a6b4f6aef068a636da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:09.712670 containerd[1546]: time="2025-11-23T22:56:09.712588837Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjft2,Uid:8226f51c-b67c-40ab-9e53-94d216a79ce7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0e527375a0c3d1f4a93992244ddb45a577433df61fe3a6b4f6aef068a636da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:09.713067 kubelet[2763]: E1123 22:56:09.712999 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0e527375a0c3d1f4a93992244ddb45a577433df61fe3a6b4f6aef068a636da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:09.713138 kubelet[2763]: E1123 22:56:09.713107 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0e527375a0c3d1f4a93992244ddb45a577433df61fe3a6b4f6aef068a636da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjft2" Nov 23 22:56:09.713175 kubelet[2763]: E1123 22:56:09.713148 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f0e527375a0c3d1f4a93992244ddb45a577433df61fe3a6b4f6aef068a636da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjft2" Nov 23 22:56:09.713342 kubelet[2763]: E1123 22:56:09.713257 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f0e527375a0c3d1f4a93992244ddb45a577433df61fe3a6b4f6aef068a636da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:09.725569 systemd[1]: run-netns-cni\x2d85fd7abe\x2d5c13\x2dbd5a\x2dd08a\x2d73de10c88c4d.mount: Deactivated successfully. 
Nov 23 22:56:09.725873 systemd[1]: run-netns-cni\x2d4b1a0b4a\x2dd30e\x2ddc14\x2db338\x2d86035ceaba23.mount: Deactivated successfully. Nov 23 22:56:10.241862 containerd[1546]: time="2025-11-23T22:56:10.241293381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgjk,Uid:380e22d8-9465-4d71-9c40-a3eb5517c805,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:10.285392 containerd[1546]: time="2025-11-23T22:56:10.285261677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvslk,Uid:1b7aa383-a5c2-41b3-8b51-d983e5ce1004,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:10.306011 containerd[1546]: time="2025-11-23T22:56:10.305973651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-sks64,Uid:d80e7c98-e2c6-4469-b5eb-05d06ffc6880,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:10.311830 containerd[1546]: time="2025-11-23T22:56:10.311649622Z" level=error msg="Failed to destroy network for sandbox \"37996727287e42246a8118465e4d309296c8547b580d4236432cdc48bce72af5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.315396 containerd[1546]: time="2025-11-23T22:56:10.315296524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgjk,Uid:380e22d8-9465-4d71-9c40-a3eb5517c805,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37996727287e42246a8118465e4d309296c8547b580d4236432cdc48bce72af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.316059 kubelet[2763]: E1123 22:56:10.315959 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37996727287e42246a8118465e4d309296c8547b580d4236432cdc48bce72af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.316059 kubelet[2763]: E1123 22:56:10.316027 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37996727287e42246a8118465e4d309296c8547b580d4236432cdc48bce72af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lsgjk" Nov 23 22:56:10.316059 kubelet[2763]: E1123 22:56:10.316047 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37996727287e42246a8118465e4d309296c8547b580d4236432cdc48bce72af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lsgjk" Nov 23 22:56:10.317324 kubelet[2763]: E1123 22:56:10.316084 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lsgjk_kube-system(380e22d8-9465-4d71-9c40-a3eb5517c805)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-lsgjk_kube-system(380e22d8-9465-4d71-9c40-a3eb5517c805)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37996727287e42246a8118465e4d309296c8547b580d4236432cdc48bce72af5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lsgjk" podUID="380e22d8-9465-4d71-9c40-a3eb5517c805" Nov 23 22:56:10.334171 containerd[1546]: time="2025-11-23T22:56:10.334122908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b6d4565b-wllpg,Uid:a5c13c52-4438-4f33-920f-ea52cca520b8,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:10.344171 containerd[1546]: time="2025-11-23T22:56:10.344115297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-tz5zf,Uid:adda981c-9ce7-4e01-b56b-dc8bfccf049e,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:10.400093 containerd[1546]: time="2025-11-23T22:56:10.399862812Z" level=error msg="Failed to destroy network for sandbox \"e5f8d21beac2bcc9feb2a6e23e9d8594fcb1b00f36852a2c5a649eab32d82b2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.402469 containerd[1546]: time="2025-11-23T22:56:10.402379759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvslk,Uid:1b7aa383-a5c2-41b3-8b51-d983e5ce1004,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f8d21beac2bcc9feb2a6e23e9d8594fcb1b00f36852a2c5a649eab32d82b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.402840 kubelet[2763]: E1123 22:56:10.402673 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f8d21beac2bcc9feb2a6e23e9d8594fcb1b00f36852a2c5a649eab32d82b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.403165 kubelet[2763]: E1123 22:56:10.403120 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f8d21beac2bcc9feb2a6e23e9d8594fcb1b00f36852a2c5a649eab32d82b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lvslk" Nov 23 22:56:10.403262 kubelet[2763]: E1123 22:56:10.403161 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5f8d21beac2bcc9feb2a6e23e9d8594fcb1b00f36852a2c5a649eab32d82b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lvslk" Nov 23 22:56:10.403262 kubelet[2763]: E1123 22:56:10.403233 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-lvslk_kube-system(1b7aa383-a5c2-41b3-8b51-d983e5ce1004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lvslk_kube-system(1b7aa383-a5c2-41b3-8b51-d983e5ce1004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5f8d21beac2bcc9feb2a6e23e9d8594fcb1b00f36852a2c5a649eab32d82b2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lvslk" podUID="1b7aa383-a5c2-41b3-8b51-d983e5ce1004" Nov 23 22:56:10.449660 containerd[1546]: time="2025-11-23T22:56:10.449561558Z" level=error msg="Failed to destroy network for sandbox \"e6cf49c088acfb3eaf76e569a1f64cc405717b8fdd13d59001981b5236e51465\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.454511 containerd[1546]: time="2025-11-23T22:56:10.454449733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-sks64,Uid:d80e7c98-e2c6-4469-b5eb-05d06ffc6880,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cf49c088acfb3eaf76e569a1f64cc405717b8fdd13d59001981b5236e51465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.455958 kubelet[2763]: E1123 22:56:10.454701 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cf49c088acfb3eaf76e569a1f64cc405717b8fdd13d59001981b5236e51465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.456079 kubelet[2763]: E1123 22:56:10.455998 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cf49c088acfb3eaf76e569a1f64cc405717b8fdd13d59001981b5236e51465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" Nov 23 22:56:10.456079 kubelet[2763]: E1123 22:56:10.456024 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6cf49c088acfb3eaf76e569a1f64cc405717b8fdd13d59001981b5236e51465\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" Nov 23 22:56:10.456133 kubelet[2763]: E1123 22:56:10.456084 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78cb8dfc4-sks64_calico-apiserver(d80e7c98-e2c6-4469-b5eb-05d06ffc6880)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78cb8dfc4-sks64_calico-apiserver(d80e7c98-e2c6-4469-b5eb-05d06ffc6880)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"e6cf49c088acfb3eaf76e569a1f64cc405717b8fdd13d59001981b5236e51465\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:56:10.466924 containerd[1546]: time="2025-11-23T22:56:10.466877190Z" level=error msg="Failed to destroy network for sandbox \"596277184a960601b32bc8dccd265ed2a01fbd6e199c8bf640c1d99a382231e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.468396 containerd[1546]: time="2025-11-23T22:56:10.468285703Z" level=error msg="Failed to destroy network for sandbox \"b8c62d7a937d5156400da7284601e834996120eb381293773f2ef324cfdbbb8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.468640 containerd[1546]: time="2025-11-23T22:56:10.468340622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-tz5zf,Uid:adda981c-9ce7-4e01-b56b-dc8bfccf049e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"596277184a960601b32bc8dccd265ed2a01fbd6e199c8bf640c1d99a382231e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.468991 kubelet[2763]: E1123 22:56:10.468885 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"596277184a960601b32bc8dccd265ed2a01fbd6e199c8bf640c1d99a382231e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.468991 kubelet[2763]: E1123 22:56:10.468956 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"596277184a960601b32bc8dccd265ed2a01fbd6e199c8bf640c1d99a382231e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" Nov 23 22:56:10.469237 kubelet[2763]: E1123 22:56:10.468976 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"596277184a960601b32bc8dccd265ed2a01fbd6e199c8bf640c1d99a382231e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" Nov 23 22:56:10.469237 kubelet[2763]: E1123 22:56:10.469065 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78cb8dfc4-tz5zf_calico-apiserver(adda981c-9ce7-4e01-b56b-dc8bfccf049e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-78cb8dfc4-tz5zf_calico-apiserver(adda981c-9ce7-4e01-b56b-dc8bfccf049e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"596277184a960601b32bc8dccd265ed2a01fbd6e199c8bf640c1d99a382231e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:56:10.471226 containerd[1546]: time="2025-11-23T22:56:10.471122208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b6d4565b-wllpg,Uid:a5c13c52-4438-4f33-920f-ea52cca520b8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c62d7a937d5156400da7284601e834996120eb381293773f2ef324cfdbbb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.471712 kubelet[2763]: E1123 22:56:10.471551 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c62d7a937d5156400da7284601e834996120eb381293773f2ef324cfdbbb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 23 22:56:10.471712 kubelet[2763]: E1123 22:56:10.471605 2763 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c62d7a937d5156400da7284601e834996120eb381293773f2ef324cfdbbb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" Nov 23 22:56:10.471712 kubelet[2763]: E1123 22:56:10.471630 2763 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8c62d7a937d5156400da7284601e834996120eb381293773f2ef324cfdbbb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" Nov 23 22:56:10.471920 kubelet[2763]: E1123 22:56:10.471664 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64b6d4565b-wllpg_calico-apiserver(a5c13c52-4438-4f33-920f-ea52cca520b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64b6d4565b-wllpg_calico-apiserver(a5c13c52-4438-4f33-920f-ea52cca520b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8c62d7a937d5156400da7284601e834996120eb381293773f2ef324cfdbbb8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:56:10.723499 systemd[1]: run-netns-cni\x2da7f8a2ed\x2d97aa\x2d2b33\x2d670f\x2d0fab76cb7862.mount: Deactivated successfully. 
Nov 23 22:56:15.428760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696097351.mount: Deactivated successfully. Nov 23 22:56:15.451769 containerd[1546]: time="2025-11-23T22:56:15.451093679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:15.452804 containerd[1546]: time="2025-11-23T22:56:15.452768912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 23 22:56:15.454256 containerd[1546]: time="2025-11-23T22:56:15.454214745Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:15.456834 containerd[1546]: time="2025-11-23T22:56:15.456765973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 23 22:56:15.457741 containerd[1546]: time="2025-11-23T22:56:15.457687929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.611275259s" Nov 23 22:56:15.457741 containerd[1546]: time="2025-11-23T22:56:15.457719329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 23 22:56:15.476880 containerd[1546]: time="2025-11-23T22:56:15.476838282Z" level=info msg="CreateContainer within sandbox \"348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 23 22:56:15.492756 containerd[1546]: time="2025-11-23T22:56:15.492325291Z" level=info msg="Container cc6f54abd42942b175a5d6f101feb672ffee836c09f71afd187f1f014475f48b: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:15.497917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943662211.mount: Deactivated successfully. Nov 23 22:56:15.509534 containerd[1546]: time="2025-11-23T22:56:15.509393453Z" level=info msg="CreateContainer within sandbox \"348fbb66d111ee4c483a745c6fc10f0a85c875b67a0d0d7f6006ff74b98fec02\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc6f54abd42942b175a5d6f101feb672ffee836c09f71afd187f1f014475f48b\"" Nov 23 22:56:15.511957 containerd[1546]: time="2025-11-23T22:56:15.511911001Z" level=info msg="StartContainer for \"cc6f54abd42942b175a5d6f101feb672ffee836c09f71afd187f1f014475f48b\"" Nov 23 22:56:15.514290 containerd[1546]: time="2025-11-23T22:56:15.514216791Z" level=info msg="connecting to shim cc6f54abd42942b175a5d6f101feb672ffee836c09f71afd187f1f014475f48b" address="unix:///run/containerd/s/f2b4da2867781b60899f82f4986b8c78784bba985bf254c76dee6648e06c9f35" protocol=ttrpc version=3 Nov 23 22:56:15.538993 systemd[1]: Started cri-containerd-cc6f54abd42942b175a5d6f101feb672ffee836c09f71afd187f1f014475f48b.scope - libcontainer container cc6f54abd42942b175a5d6f101feb672ffee836c09f71afd187f1f014475f48b. 
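[Editor's note] The calico/node pull above reports 150,934,424 bytes in 6.611275259 s, and the PullImage request for that tag was logged earlier at 22:56:08.846. The reported duration and the gap between those two containerd entries agree to well under a millisecond, and the effective rate (~23 MB/s) is comparable to the earlier cni pull; a quick check of both figures, with timestamps copied from the log and truncated to microseconds:

```python
from datetime import datetime, timezone

def ts(s):
    """Parse a containerd log timestamp, truncated to microsecond precision."""
    return datetime.strptime(s[:26], "%Y-%m-%dT%H:%M:%S.%f").replace(tzinfo=timezone.utc)

# Values copied from the two containerd entries for ghcr.io/flatcar/calico/node:v3.30.4.
pull_requested = ts("2025-11-23T22:56:08.846038472Z")
pull_finished  = ts("2025-11-23T22:56:15.457687929Z")
reported_size  = 150_934_424    # bytes, from the "Pulled image" entry
reported_dur   = 6.611275259    # seconds, from the same entry

wall_clock = (pull_finished - pull_requested).total_seconds()
print(f"gap between log entries: {wall_clock:.6f} s (reported {reported_dur} s)")
print(f"effective rate: ~{reported_size / reported_dur / 1e6:.1f} MB/s")
```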
Nov 23 22:56:15.630678 containerd[1546]: time="2025-11-23T22:56:15.630524859Z" level=info msg="StartContainer for \"cc6f54abd42942b175a5d6f101feb672ffee836c09f71afd187f1f014475f48b\" returns successfully" Nov 23 22:56:15.784589 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 23 22:56:15.784703 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 23 22:56:15.905972 kubelet[2763]: I1123 22:56:15.905617 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fkqxl" podStartSLOduration=2.149756192 podStartE2EDuration="16.905596361s" podCreationTimestamp="2025-11-23 22:55:59 +0000 UTC" firstStartedPulling="2025-11-23 22:56:00.703239754 +0000 UTC m=+29.189358065" lastFinishedPulling="2025-11-23 22:56:15.459079923 +0000 UTC m=+43.945198234" observedRunningTime="2025-11-23 22:56:15.902223776 +0000 UTC m=+44.388342087" watchObservedRunningTime="2025-11-23 22:56:15.905596361 +0000 UTC m=+44.391714672" Nov 23 22:56:16.083709 kubelet[2763]: I1123 22:56:16.083659 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-backend-key-pair\") pod \"64bbc75f-74cd-4202-9b89-037fec03aca5\" (UID: \"64bbc75f-74cd-4202-9b89-037fec03aca5\") " Nov 23 22:56:16.083963 kubelet[2763]: I1123 22:56:16.083714 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkmd2\" (UniqueName: \"kubernetes.io/projected/64bbc75f-74cd-4202-9b89-037fec03aca5-kube-api-access-kkmd2\") pod \"64bbc75f-74cd-4202-9b89-037fec03aca5\" (UID: \"64bbc75f-74cd-4202-9b89-037fec03aca5\") " Nov 23 22:56:16.084755 kubelet[2763]: I1123 22:56:16.084501 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-ca-bundle\") pod \"64bbc75f-74cd-4202-9b89-037fec03aca5\" (UID: \"64bbc75f-74cd-4202-9b89-037fec03aca5\") " Nov 23 22:56:16.085693 kubelet[2763]: I1123 22:56:16.085129 2763 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "64bbc75f-74cd-4202-9b89-037fec03aca5" (UID: "64bbc75f-74cd-4202-9b89-037fec03aca5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 23 22:56:16.090551 kubelet[2763]: I1123 22:56:16.090487 2763 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "64bbc75f-74cd-4202-9b89-037fec03aca5" (UID: "64bbc75f-74cd-4202-9b89-037fec03aca5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 23 22:56:16.092623 kubelet[2763]: I1123 22:56:16.092555 2763 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64bbc75f-74cd-4202-9b89-037fec03aca5-kube-api-access-kkmd2" (OuterVolumeSpecName: "kube-api-access-kkmd2") pod "64bbc75f-74cd-4202-9b89-037fec03aca5" (UID: "64bbc75f-74cd-4202-9b89-037fec03aca5"). InnerVolumeSpecName "kube-api-access-kkmd2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 23 22:56:16.185347 kubelet[2763]: I1123 22:56:16.185227 2763 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kkmd2\" (UniqueName: \"kubernetes.io/projected/64bbc75f-74cd-4202-9b89-037fec03aca5-kube-api-access-kkmd2\") on node \"ci-4459-1-2-5-0c65a92823\" DevicePath \"\"" Nov 23 22:56:16.185347 kubelet[2763]: I1123 22:56:16.185776 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-ca-bundle\") on node \"ci-4459-1-2-5-0c65a92823\" DevicePath \"\"" Nov 23 22:56:16.185347 kubelet[2763]: I1123 22:56:16.185835 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/64bbc75f-74cd-4202-9b89-037fec03aca5-whisker-backend-key-pair\") on node \"ci-4459-1-2-5-0c65a92823\" DevicePath \"\"" Nov 23 22:56:16.430983 systemd[1]: var-lib-kubelet-pods-64bbc75f\x2d74cd\x2d4202\x2d9b89\x2d037fec03aca5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkmd2.mount: Deactivated successfully. Nov 23 22:56:16.431081 systemd[1]: var-lib-kubelet-pods-64bbc75f\x2d74cd\x2d4202\x2d9b89\x2d037fec03aca5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 23 22:56:16.918622 systemd[1]: Removed slice kubepods-besteffort-pod64bbc75f_74cd_4202_9b89_037fec03aca5.slice - libcontainer container kubepods-besteffort-pod64bbc75f_74cd_4202_9b89_037fec03aca5.slice. Nov 23 22:56:17.011331 systemd[1]: Created slice kubepods-besteffort-pod71fd8f09_1ec1_4a2b_a495_70eb0d66adad.slice - libcontainer container kubepods-besteffort-pod71fd8f09_1ec1_4a2b_a495_70eb0d66adad.slice. Nov 23 22:56:17.092187 kubelet[2763]: I1123 22:56:17.092011 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/71fd8f09-1ec1-4a2b-a495-70eb0d66adad-whisker-backend-key-pair\") pod \"whisker-6db66bb6fb-5fmxw\" (UID: \"71fd8f09-1ec1-4a2b-a495-70eb0d66adad\") " pod="calico-system/whisker-6db66bb6fb-5fmxw" Nov 23 22:56:17.092187 kubelet[2763]: I1123 22:56:17.092103 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71fd8f09-1ec1-4a2b-a495-70eb0d66adad-whisker-ca-bundle\") pod \"whisker-6db66bb6fb-5fmxw\" (UID: \"71fd8f09-1ec1-4a2b-a495-70eb0d66adad\") " pod="calico-system/whisker-6db66bb6fb-5fmxw" Nov 23 22:56:17.093002 kubelet[2763]: I1123 22:56:17.092297 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvfwt\" (UniqueName: \"kubernetes.io/projected/71fd8f09-1ec1-4a2b-a495-70eb0d66adad-kube-api-access-qvfwt\") pod \"whisker-6db66bb6fb-5fmxw\" (UID: \"71fd8f09-1ec1-4a2b-a495-70eb0d66adad\") " pod="calico-system/whisker-6db66bb6fb-5fmxw" Nov 23 22:56:17.324992 containerd[1546]: time="2025-11-23T22:56:17.324886252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db66bb6fb-5fmxw,Uid:71fd8f09-1ec1-4a2b-a495-70eb0d66adad,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:17.589094 systemd-networkd[1425]: cali6ad72d9ae91: Link UP Nov 23 22:56:17.592303 systemd-networkd[1425]: cali6ad72d9ae91: Gained carrier Nov 23 22:56:17.619192 containerd[1546]: 2025-11-23 22:56:17.379 [INFO][3904] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 23 
22:56:17.619192 containerd[1546]: 2025-11-23 22:56:17.435 [INFO][3904] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0 whisker-6db66bb6fb- calico-system 71fd8f09-1ec1-4a2b-a495-70eb0d66adad 944 0 2025-11-23 22:56:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6db66bb6fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 whisker-6db66bb6fb-5fmxw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6ad72d9ae91 [] [] }} ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-" Nov 23 22:56:17.619192 containerd[1546]: 2025-11-23 22:56:17.435 [INFO][3904] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" Nov 23 22:56:17.619192 containerd[1546]: 2025-11-23 22:56:17.505 [INFO][3967] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" HandleID="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Workload="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.506 [INFO][3967] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" HandleID="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Workload="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033f9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-5-0c65a92823", "pod":"whisker-6db66bb6fb-5fmxw", "timestamp":"2025-11-23 22:56:17.505431417 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.506 [INFO][3967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.506 [INFO][3967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.506 [INFO][3967] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.526 [INFO][3967] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.536 [INFO][3967] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.543 [INFO][3967] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.546 [INFO][3967] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.619925 containerd[1546]: 2025-11-23 22:56:17.550 [INFO][3967] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.620276 containerd[1546]: 2025-11-23 22:56:17.551 [INFO][3967] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.620276 containerd[1546]: 2025-11-23 22:56:17.553 [INFO][3967] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3 Nov 23 22:56:17.620276 containerd[1546]: 2025-11-23 22:56:17.561 [INFO][3967] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.620276 containerd[1546]: 2025-11-23 22:56:17.569 [INFO][3967] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.1/26] block=192.168.34.0/26 handle="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.620276 containerd[1546]: 2025-11-23 22:56:17.570 [INFO][3967] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.1/26] handle="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:17.620276 containerd[1546]: 2025-11-23 22:56:17.570 [INFO][3967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:56:17.620276 containerd[1546]: 2025-11-23 22:56:17.570 [INFO][3967] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.1/26] IPv6=[] ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" HandleID="k8s-pod-network.0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Workload="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" Nov 23 22:56:17.620417 containerd[1546]: 2025-11-23 22:56:17.575 [INFO][3904] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0", GenerateName:"whisker-6db66bb6fb-", Namespace:"calico-system", SelfLink:"", UID:"71fd8f09-1ec1-4a2b-a495-70eb0d66adad", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6db66bb6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"whisker-6db66bb6fb-5fmxw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ad72d9ae91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:17.620417 containerd[1546]: 2025-11-23 22:56:17.575 [INFO][3904] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.1/32] ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" Nov 23 22:56:17.620495 containerd[1546]: 2025-11-23 22:56:17.575 [INFO][3904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ad72d9ae91 ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" Nov 23 22:56:17.620495 containerd[1546]: 2025-11-23 22:56:17.593 [INFO][3904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" Nov 23 22:56:17.620539 containerd[1546]: 2025-11-23 22:56:17.596 [INFO][3904] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" 
Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0", GenerateName:"whisker-6db66bb6fb-", Namespace:"calico-system", SelfLink:"", UID:"71fd8f09-1ec1-4a2b-a495-70eb0d66adad", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 56, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6db66bb6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3", Pod:"whisker-6db66bb6fb-5fmxw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6ad72d9ae91", MAC:"52:b3:40:18:9d:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:17.620589 containerd[1546]: 2025-11-23 22:56:17.615 [INFO][3904] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" Namespace="calico-system" Pod="whisker-6db66bb6fb-5fmxw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-whisker--6db66bb6fb--5fmxw-eth0" Nov 23 22:56:17.656097 kubelet[2763]: I1123 22:56:17.656004 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64bbc75f-74cd-4202-9b89-037fec03aca5" path="/var/lib/kubelet/pods/64bbc75f-74cd-4202-9b89-037fec03aca5/volumes" Nov 23 22:56:17.665824 containerd[1546]: time="2025-11-23T22:56:17.665768551Z" level=info msg="connecting to shim 0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3" address="unix:///run/containerd/s/3b145066244f47cb6d218b3fa311c90521d2bf970a494fcd6bba8f3b85e99f38" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:17.720005 systemd[1]: Started cri-containerd-0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3.scope - libcontainer container 0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3. 
Nov 23 22:56:17.807537 containerd[1546]: time="2025-11-23T22:56:17.807486326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db66bb6fb-5fmxw,Uid:71fd8f09-1ec1-4a2b-a495-70eb0d66adad,Namespace:calico-system,Attempt:0,} returns sandbox id \"0df4441c67db8cce64b48b59617ea8083c9c6f8bcead7eb2d0c1fdadecbcc6e3\"" Nov 23 22:56:17.810047 containerd[1546]: time="2025-11-23T22:56:17.809985515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:56:18.131878 systemd-networkd[1425]: vxlan.calico: Link UP Nov 23 22:56:18.131890 systemd-networkd[1425]: vxlan.calico: Gained carrier Nov 23 22:56:18.150447 containerd[1546]: time="2025-11-23T22:56:18.150399387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:18.152118 containerd[1546]: time="2025-11-23T22:56:18.152033020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:56:18.152118 containerd[1546]: time="2025-11-23T22:56:18.152077900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:56:18.152472 kubelet[2763]: E1123 22:56:18.152310 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:18.152472 kubelet[2763]: E1123 22:56:18.152358 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:18.165571 kubelet[2763]: E1123 22:56:18.165051 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0699b6371c9d444dac6521e58a9fef96,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:18.169571 containerd[1546]: time="2025-11-23T22:56:18.169483185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:56:18.522113 containerd[1546]: time="2025-11-23T22:56:18.521961579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:18.524485 containerd[1546]: time="2025-11-23T22:56:18.524412209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:56:18.524617 containerd[1546]: time="2025-11-23T22:56:18.524533288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:18.524869 kubelet[2763]: E1123 22:56:18.524820 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:18.524967 kubelet[2763]: E1123 22:56:18.524885 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:18.525144 kubelet[2763]: E1123 22:56:18.525019 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:18.526631 kubelet[2763]: E1123 22:56:18.526574 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:56:18.896574 kubelet[2763]: E1123 22:56:18.896423 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:56:19.154967 systemd-networkd[1425]: cali6ad72d9ae91: Gained IPv6LL Nov 23 22:56:19.858978 systemd-networkd[1425]: vxlan.calico: Gained IPv6LL Nov 23 22:56:20.642420 containerd[1546]: time="2025-11-23T22:56:20.641833245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c79f46457-wvsqw,Uid:5efde8bf-2f30-47b7-ac7d-0827fb837ab3,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:20.793008 systemd-networkd[1425]: cali72d0c2bbf00: Link UP Nov 23 22:56:20.793491 systemd-networkd[1425]: cali72d0c2bbf00: Gained carrier Nov 23 22:56:20.824988 containerd[1546]: 2025-11-23 22:56:20.689 [INFO][4138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0 calico-kube-controllers-5c79f46457- calico-system 5efde8bf-2f30-47b7-ac7d-0827fb837ab3 867 0 2025-11-23 22:55:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c79f46457 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 calico-kube-controllers-5c79f46457-wvsqw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali72d0c2bbf00 [] [] }} ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-" Nov 23 22:56:20.824988 containerd[1546]: 2025-11-23 22:56:20.689 [INFO][4138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" Nov 23 22:56:20.824988 containerd[1546]: 2025-11-23 22:56:20.727 [INFO][4149] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" HandleID="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.727 [INFO][4149] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" HandleID="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" 
Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-5-0c65a92823", "pod":"calico-kube-controllers-5c79f46457-wvsqw", "timestamp":"2025-11-23 22:56:20.727006048 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.727 [INFO][4149] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.727 [INFO][4149] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.727 [INFO][4149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.737 [INFO][4149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.746 [INFO][4149] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.753 [INFO][4149] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.755 [INFO][4149] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.826897 containerd[1546]: 2025-11-23 22:56:20.760 [INFO][4149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.827129 containerd[1546]: 2025-11-23 22:56:20.761 [INFO][4149] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.827129 containerd[1546]: 2025-11-23 22:56:20.763 [INFO][4149] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008 Nov 23 22:56:20.827129 containerd[1546]: 2025-11-23 22:56:20.772 [INFO][4149] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.827129 containerd[1546]: 2025-11-23 22:56:20.782 [INFO][4149] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.2/26] block=192.168.34.0/26 handle="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.827129 containerd[1546]: 2025-11-23 22:56:20.782 [INFO][4149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.2/26] handle="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:20.827129 containerd[1546]: 2025-11-23 22:56:20.782 [INFO][4149] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 23 22:56:20.827129 containerd[1546]: 2025-11-23 22:56:20.782 [INFO][4149] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.2/26] IPv6=[] ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" HandleID="k8s-pod-network.ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" Nov 23 22:56:20.827264 containerd[1546]: 2025-11-23 22:56:20.786 [INFO][4138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0", GenerateName:"calico-kube-controllers-5c79f46457-", Namespace:"calico-system", SelfLink:"", UID:"5efde8bf-2f30-47b7-ac7d-0827fb837ab3", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c79f46457", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"calico-kube-controllers-5c79f46457-wvsqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72d0c2bbf00", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:20.827929 containerd[1546]: 2025-11-23 22:56:20.787 [INFO][4138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.2/32] ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" Nov 23 22:56:20.827929 containerd[1546]: 2025-11-23 22:56:20.787 [INFO][4138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72d0c2bbf00 ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" Nov 23 22:56:20.827929 containerd[1546]: 2025-11-23 22:56:20.792 [INFO][4138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" 
WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" Nov 23 22:56:20.828304 containerd[1546]: 2025-11-23 22:56:20.794 [INFO][4138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0", GenerateName:"calico-kube-controllers-5c79f46457-", Namespace:"calico-system", SelfLink:"", UID:"5efde8bf-2f30-47b7-ac7d-0827fb837ab3", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c79f46457", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008", Pod:"calico-kube-controllers-5c79f46457-wvsqw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali72d0c2bbf00", MAC:"4a:e3:a1:03:94:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:20.828378 containerd[1546]: 2025-11-23 22:56:20.818 [INFO][4138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" Namespace="calico-system" Pod="calico-kube-controllers-5c79f46457-wvsqw" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--kube--controllers--5c79f46457--wvsqw-eth0" Nov 23 22:56:20.898831 containerd[1546]: time="2025-11-23T22:56:20.898601090Z" level=info msg="connecting to shim ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008" address="unix:///run/containerd/s/88c2e74f294f282048aaaecf142813bae291632ad2bb717f9dded2e29cc6c3dd" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:20.932025 systemd[1]: Started cri-containerd-ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008.scope - libcontainer container ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008. 
Nov 23 22:56:20.976966 containerd[1546]: time="2025-11-23T22:56:20.976905722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c79f46457-wvsqw,Uid:5efde8bf-2f30-47b7-ac7d-0827fb837ab3,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea85874579bc6addab7d88cb49fcc9c4b2047a05618fb326c03e4257f827b008\"" Nov 23 22:56:20.980032 containerd[1546]: time="2025-11-23T22:56:20.979975909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:56:21.307026 containerd[1546]: time="2025-11-23T22:56:21.306930999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:21.308902 containerd[1546]: time="2025-11-23T22:56:21.308783751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:56:21.308902 containerd[1546]: time="2025-11-23T22:56:21.308847711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:21.311671 kubelet[2763]: E1123 22:56:21.309233 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:21.311671 kubelet[2763]: E1123 22:56:21.309353 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:21.311671 kubelet[2763]: E1123 22:56:21.309577 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn6zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c79f46457-wvsqw_calico-system(5efde8bf-2f30-47b7-ac7d-0827fb837ab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:21.311671 kubelet[2763]: E1123 22:56:21.311575 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:56:21.644141 containerd[1546]: time="2025-11-23T22:56:21.643771649Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgjk,Uid:380e22d8-9465-4d71-9c40-a3eb5517c805,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:21.645655 containerd[1546]: time="2025-11-23T22:56:21.645574001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjft2,Uid:8226f51c-b67c-40ab-9e53-94d216a79ce7,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:21.646069 containerd[1546]: time="2025-11-23T22:56:21.646015800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvslk,Uid:1b7aa383-a5c2-41b3-8b51-d983e5ce1004,Namespace:kube-system,Attempt:0,}" Nov 23 22:56:21.646632 containerd[1546]: time="2025-11-23T22:56:21.646590397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ckjtj,Uid:91280d56-7002-4fda-b0e5-b372b6025512,Namespace:calico-system,Attempt:0,}" Nov 23 22:56:21.843560 systemd-networkd[1425]: cali72d0c2bbf00: Gained IPv6LL Nov 23 22:56:21.921250 kubelet[2763]: E1123 22:56:21.921141 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:56:22.028124 systemd-networkd[1425]: cali5b6b1f93c3c: Link UP Nov 23 22:56:22.031964 systemd-networkd[1425]: cali5b6b1f93c3c: Gained carrier Nov 23 22:56:22.073511 containerd[1546]: 2025-11-23 22:56:21.793 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0 coredns-668d6bf9bc- kube-system 1b7aa383-a5c2-41b3-8b51-d983e5ce1004 869 0 2025-11-23 22:55:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 coredns-668d6bf9bc-lvslk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5b6b1f93c3c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-" Nov 23 22:56:22.073511 containerd[1546]: 2025-11-23 22:56:21.794 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" Nov 23 22:56:22.073511 containerd[1546]: 2025-11-23 22:56:21.917 [INFO][4270] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" HandleID="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Workload="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.917 [INFO][4270] ipam/ipam_plugin.go 
275: Auto assigning IP ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" HandleID="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Workload="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3c40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-2-5-0c65a92823", "pod":"coredns-668d6bf9bc-lvslk", "timestamp":"2025-11-23 22:56:21.917638119 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.917 [INFO][4270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.918 [INFO][4270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.918 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.972 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.984 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.990 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.993 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.074647 containerd[1546]: 2025-11-23 22:56:21.996 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.075389 containerd[1546]: 2025-11-23 22:56:21.996 [INFO][4270] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.075389 containerd[1546]: 2025-11-23 22:56:21.999 [INFO][4270] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0 Nov 23 22:56:22.075389 containerd[1546]: 2025-11-23 22:56:22.007 [INFO][4270] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.075389 containerd[1546]: 2025-11-23 22:56:22.016 [INFO][4270] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.3/26] block=192.168.34.0/26 handle="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.075389 containerd[1546]: 2025-11-23 22:56:22.016 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.3/26] handle="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.075389 containerd[1546]: 2025-11-23 22:56:22.016 
[INFO][4270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:56:22.075389 containerd[1546]: 2025-11-23 22:56:22.016 [INFO][4270] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.3/26] IPv6=[] ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" HandleID="k8s-pod-network.56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Workload="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" Nov 23 22:56:22.076271 containerd[1546]: 2025-11-23 22:56:22.022 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b7aa383-a5c2-41b3-8b51-d983e5ce1004", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"coredns-668d6bf9bc-lvslk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b6b1f93c3c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.076271 containerd[1546]: 2025-11-23 22:56:22.023 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.3/32] ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" Nov 23 22:56:22.076271 containerd[1546]: 2025-11-23 22:56:22.023 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b6b1f93c3c ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" Nov 23 22:56:22.076271 containerd[1546]: 2025-11-23 22:56:22.041 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" Nov 23 22:56:22.076271 containerd[1546]: 2025-11-23 22:56:22.045 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b7aa383-a5c2-41b3-8b51-d983e5ce1004", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0", Pod:"coredns-668d6bf9bc-lvslk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b6b1f93c3c", MAC:"52:e7:76:11:ce:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.076271 containerd[1546]: 2025-11-23 22:56:22.065 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvslk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lvslk-eth0" Nov 23 22:56:22.115221 containerd[1546]: time="2025-11-23T22:56:22.114803712Z" level=info msg="connecting to shim 56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0" address="unix:///run/containerd/s/cb22f160d0c9a8ab1563e6e38f5d3ddb8e5a77da8eaa47597bd37a0a5767b7bc" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:22.153267 systemd[1]: Started cri-containerd-56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0.scope - libcontainer container 56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0. 
Nov 23 22:56:22.168218 systemd-networkd[1425]: calid9ee48eaa84: Link UP Nov 23 22:56:22.169951 systemd-networkd[1425]: calid9ee48eaa84: Gained carrier Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:21.789 [INFO][4222] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0 goldmane-666569f655- calico-system 91280d56-7002-4fda-b0e5-b372b6025512 871 0 2025-11-23 22:55:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 goldmane-666569f655-ckjtj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid9ee48eaa84 [] [] }} ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:21.790 [INFO][4222] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:21.960 [INFO][4263] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" HandleID="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Workload="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:21.964 [INFO][4263] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" HandleID="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Workload="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400022cdb0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-5-0c65a92823", "pod":"goldmane-666569f655-ckjtj", "timestamp":"2025-11-23 22:56:21.96097762 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:21.965 [INFO][4263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.016 [INFO][4263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.017 [INFO][4263] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.072 [INFO][4263] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.083 [INFO][4263] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.094 [INFO][4263] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.104 [INFO][4263] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.110 [INFO][4263] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.111 [INFO][4263] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.115 [INFO][4263] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15 Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.123 [INFO][4263] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.133 [INFO][4263] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.4/26] block=192.168.34.0/26 handle="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.134 [INFO][4263] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.4/26] handle="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.134 [INFO][4263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
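Annotation: the IPAM sequence above (affinity lookup for 192.168.34.0/26, block load, claim) ends with 192.168.34.4/26 being handed to the goldmane pod. A small standard-library Go sketch, using only the addresses printed in these log lines, shows the containment check that makes the assignment consistent with the affine block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values taken from the IPAM log lines above.
	block := netip.MustParsePrefix("192.168.34.0/26")
	claimed := netip.MustParseAddr("192.168.34.4")

	// The claimed IP must fall inside the block this host has affinity for.
	fmt.Println("block contains claimed IP:", block.Contains(claimed)) // true
	fmt.Println("addresses in a /26 block: ", 1<<(32-block.Bits()))    // 64
}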
Nov 23 22:56:22.194641 containerd[1546]: 2025-11-23 22:56:22.135 [INFO][4263] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.4/26] IPv6=[] ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" HandleID="k8s-pod-network.7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Workload="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" Nov 23 22:56:22.196998 containerd[1546]: 2025-11-23 22:56:22.142 [INFO][4222] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"91280d56-7002-4fda-b0e5-b372b6025512", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"goldmane-666569f655-ckjtj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid9ee48eaa84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.196998 containerd[1546]: 2025-11-23 22:56:22.142 [INFO][4222] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.4/32] ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" Nov 23 22:56:22.196998 containerd[1546]: 2025-11-23 22:56:22.142 [INFO][4222] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid9ee48eaa84 ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" Nov 23 22:56:22.196998 containerd[1546]: 2025-11-23 22:56:22.169 [INFO][4222] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" Nov 23 22:56:22.196998 containerd[1546]: 2025-11-23 22:56:22.172 [INFO][4222] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" 
Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"91280d56-7002-4fda-b0e5-b372b6025512", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15", Pod:"goldmane-666569f655-ckjtj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid9ee48eaa84", MAC:"22:59:a3:fb:c3:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.196998 containerd[1546]: 2025-11-23 22:56:22.189 [INFO][4222] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" Namespace="calico-system" Pod="goldmane-666569f655-ckjtj" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-goldmane--666569f655--ckjtj-eth0" Nov 23 22:56:22.258235 containerd[1546]: time="2025-11-23T22:56:22.258015010Z" level=info msg="connecting to shim 7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15" address="unix:///run/containerd/s/d13b507eb08adb9f641dccb59932fa89de4644f672b9f08626f7bd9c6699bcd5" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:22.269515 systemd-networkd[1425]: califa4b76184e1: Link UP Nov 23 22:56:22.271353 systemd-networkd[1425]: califa4b76184e1: Gained carrier Nov 23 22:56:22.295864 containerd[1546]: time="2025-11-23T22:56:22.295809576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvslk,Uid:1b7aa383-a5c2-41b3-8b51-d983e5ce1004,Namespace:kube-system,Attempt:0,} returns sandbox id \"56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0\"" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:21.798 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0 coredns-668d6bf9bc- kube-system 380e22d8-9465-4d71-9c40-a3eb5517c805 861 0 2025-11-23 22:55:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 coredns-668d6bf9bc-lsgjk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa4b76184e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:21.798 [INFO][4212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:21.973 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" HandleID="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Workload="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:21.974 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" HandleID="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Workload="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb6c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-2-5-0c65a92823", "pod":"coredns-668d6bf9bc-lsgjk", "timestamp":"2025-11-23 22:56:21.973954247 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:21.974 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.134 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.134 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.177 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.190 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.207 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.214 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.220 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.221 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.225 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241 Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.237 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.248 [INFO][4275] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.5/26] block=192.168.34.0/26 handle="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.249 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.5/26] handle="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.251 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
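Annotation: note how the handlers serialize on the host-wide IPAM lock: [4263] releases it at 22:56:22.134 and [4275] acquires it at the same instant, claiming the next free address (.5 after .4); the third handler, [4280], repeats the pattern further below. The real Calico lock is host-wide in the datastore, not an in-process mutex, so the following Go sketch is only an illustration of that acquire/claim/release ordering, not Calico's implementation:

package main

import (
	"fmt"
	"sync"
)

// allocator hands out sequential host addresses from a shared block while a
// single lock is held, mirroring the ordering visible in the [4263]/[4275]
// handler lines above (and [4280] below).
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) claimNext(pod string) string {
	a.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.34.%d/26", a.next)
	a.next++
	return pod + " -> " + ip
}

func main() {
	a := &allocator{next: 4}
	for _, pod := range []string{"goldmane-666569f655-ckjtj", "coredns-668d6bf9bc-lsgjk", "csi-node-driver-zjft2"} {
		fmt.Println(a.claimNext(pod)) // .4, .5, .6 in order, matching the log
	}
}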
Nov 23 22:56:22.309905 containerd[1546]: 2025-11-23 22:56:22.251 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.5/26] IPv6=[] ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" HandleID="k8s-pod-network.c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Workload="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" Nov 23 22:56:22.311628 containerd[1546]: 2025-11-23 22:56:22.264 [INFO][4212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"380e22d8-9465-4d71-9c40-a3eb5517c805", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"coredns-668d6bf9bc-lsgjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa4b76184e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.311628 containerd[1546]: 2025-11-23 22:56:22.264 [INFO][4212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.5/32] ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" Nov 23 22:56:22.311628 containerd[1546]: 2025-11-23 22:56:22.264 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa4b76184e1 ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" Nov 23 22:56:22.311628 containerd[1546]: 2025-11-23 22:56:22.274 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" Nov 23 22:56:22.311628 containerd[1546]: 2025-11-23 22:56:22.276 [INFO][4212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"380e22d8-9465-4d71-9c40-a3eb5517c805", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241", Pod:"coredns-668d6bf9bc-lsgjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa4b76184e1", MAC:"ba:b2:0c:eb:4a:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.311628 containerd[1546]: 2025-11-23 22:56:22.300 [INFO][4212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" Namespace="kube-system" Pod="coredns-668d6bf9bc-lsgjk" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-coredns--668d6bf9bc--lsgjk-eth0" Nov 23 22:56:22.313770 containerd[1546]: time="2025-11-23T22:56:22.313674703Z" level=info msg="CreateContainer within sandbox \"56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:56:22.329527 systemd[1]: Started cri-containerd-7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15.scope - libcontainer container 7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15. 
Nov 23 22:56:22.345642 containerd[1546]: time="2025-11-23T22:56:22.344981736Z" level=info msg="Container 2c3ab248e9b2b2215ba0ab96a25bdc2b497725d0616287a6123639aa6812d539: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:22.355933 containerd[1546]: time="2025-11-23T22:56:22.355469934Z" level=info msg="CreateContainer within sandbox \"56158438e0a423e11cac7ed227735d8bb443a47d6f1f42fb35d0f59d201be7f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c3ab248e9b2b2215ba0ab96a25bdc2b497725d0616287a6123639aa6812d539\"" Nov 23 22:56:22.357802 containerd[1546]: time="2025-11-23T22:56:22.357771284Z" level=info msg="StartContainer for \"2c3ab248e9b2b2215ba0ab96a25bdc2b497725d0616287a6123639aa6812d539\"" Nov 23 22:56:22.359258 containerd[1546]: time="2025-11-23T22:56:22.359226198Z" level=info msg="connecting to shim 2c3ab248e9b2b2215ba0ab96a25bdc2b497725d0616287a6123639aa6812d539" address="unix:///run/containerd/s/cb22f160d0c9a8ab1563e6e38f5d3ddb8e5a77da8eaa47597bd37a0a5767b7bc" protocol=ttrpc version=3 Nov 23 22:56:22.380475 containerd[1546]: time="2025-11-23T22:56:22.380428312Z" level=info msg="connecting to shim c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241" address="unix:///run/containerd/s/ec3d17c7604fc000560a6666bf127a3703cf09e3684d372d3bb2912ebab674e6" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:22.420888 systemd-networkd[1425]: calid56de307e2b: Link UP Nov 23 22:56:22.427950 systemd-networkd[1425]: calid56de307e2b: Gained carrier Nov 23 22:56:22.453918 systemd[1]: Started cri-containerd-2c3ab248e9b2b2215ba0ab96a25bdc2b497725d0616287a6123639aa6812d539.scope - libcontainer container 2c3ab248e9b2b2215ba0ab96a25bdc2b497725d0616287a6123639aa6812d539. Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:21.826 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0 csi-node-driver- calico-system 8226f51c-b67c-40ab-9e53-94d216a79ce7 758 0 2025-11-23 22:55:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 csi-node-driver-zjft2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid56de307e2b [] [] }} ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:21.826 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:21.973 [INFO][4280] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" HandleID="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Workload="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:21.975 
[INFO][4280] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" HandleID="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Workload="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032bb70), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-2-5-0c65a92823", "pod":"csi-node-driver-zjft2", "timestamp":"2025-11-23 22:56:21.973902927 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:21.975 [INFO][4280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.255 [INFO][4280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.255 [INFO][4280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.289 [INFO][4280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.306 [INFO][4280] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.324 [INFO][4280] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.331 [INFO][4280] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.337 [INFO][4280] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.337 [INFO][4280] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.340 [INFO][4280] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5 Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.349 [INFO][4280] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.387 [INFO][4280] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.6/26] block=192.168.34.0/26 handle="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.390 [INFO][4280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.6/26] handle="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:22.472922 containerd[1546]: 
2025-11-23 22:56:22.390 [INFO][4280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:56:22.472922 containerd[1546]: 2025-11-23 22:56:22.390 [INFO][4280] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.6/26] IPv6=[] ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" HandleID="k8s-pod-network.2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Workload="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" Nov 23 22:56:22.473541 containerd[1546]: 2025-11-23 22:56:22.406 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8226f51c-b67c-40ab-9e53-94d216a79ce7", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"csi-node-driver-zjft2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid56de307e2b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.473541 containerd[1546]: 2025-11-23 22:56:22.409 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.6/32] ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" Nov 23 22:56:22.473541 containerd[1546]: 2025-11-23 22:56:22.409 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid56de307e2b ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" Nov 23 22:56:22.473541 containerd[1546]: 2025-11-23 22:56:22.427 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" Nov 23 22:56:22.473541 containerd[1546]: 2025-11-23 22:56:22.431 [INFO][4217] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8226f51c-b67c-40ab-9e53-94d216a79ce7", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5", Pod:"csi-node-driver-zjft2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid56de307e2b", MAC:"5e:0a:f4:78:2d:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:22.473541 containerd[1546]: 2025-11-23 22:56:22.459 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" Namespace="calico-system" Pod="csi-node-driver-zjft2" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-csi--node--driver--zjft2-eth0" Nov 23 22:56:22.502341 containerd[1546]: time="2025-11-23T22:56:22.501952818Z" level=info msg="connecting to shim 2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5" address="unix:///run/containerd/s/97d8aa4090670ba8c589f91359ca3390c5ccce107878519b5bc2e89a8487e349" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:22.522052 systemd[1]: Started cri-containerd-c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241.scope - libcontainer container c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241. Nov 23 22:56:22.543992 systemd[1]: Started cri-containerd-2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5.scope - libcontainer container 2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5. 
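Annotation: every endpoint above is finalized with a generated MAC (52:e7:76:11:ce:d4, 22:59:a3:fb:c3:1f, ba:b2:0c:eb:4a:23, and 5e:0a:f4:78:2d:40 for csi-node-driver-zjft2). All of them have the locally-administered bit set and the multicast bit clear, consistent with software-generated addresses for veth interfaces rather than vendor-assigned hardware MACs. A short Go sketch verifying that property for the MACs printed in this log:

package main

import (
	"fmt"
	"net"
)

func main() {
	// MACs copied from the "Added Mac ... to endpoint" lines above.
	macs := []string{
		"52:e7:76:11:ce:d4", // coredns-668d6bf9bc-lvslk
		"22:59:a3:fb:c3:1f", // goldmane-666569f655-ckjtj
		"ba:b2:0c:eb:4a:23", // coredns-668d6bf9bc-lsgjk
		"5e:0a:f4:78:2d:40", // csi-node-driver-zjft2
	}
	for _, s := range macs {
		hw, err := net.ParseMAC(s)
		if err != nil {
			panic(err)
		}
		local := hw[0]&0x02 != 0   // locally-administered bit set
		unicast := hw[0]&0x01 == 0 // multicast bit clear
		fmt.Printf("%s locally-administered=%v unicast=%v\n", s, local, unicast)
	}
}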
Nov 23 22:56:22.596587 containerd[1546]: time="2025-11-23T22:56:22.596522233Z" level=info msg="StartContainer for \"2c3ab248e9b2b2215ba0ab96a25bdc2b497725d0616287a6123639aa6812d539\" returns successfully" Nov 23 22:56:22.602953 containerd[1546]: time="2025-11-23T22:56:22.602851488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lsgjk,Uid:380e22d8-9465-4d71-9c40-a3eb5517c805,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241\"" Nov 23 22:56:22.621136 containerd[1546]: time="2025-11-23T22:56:22.619320261Z" level=info msg="CreateContainer within sandbox \"c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 23 22:56:22.645770 containerd[1546]: time="2025-11-23T22:56:22.645709633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b6d4565b-wllpg,Uid:a5c13c52-4438-4f33-920f-ea52cca520b8,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:22.683838 containerd[1546]: time="2025-11-23T22:56:22.683795758Z" level=info msg="Container 01ddfd0e2bf54f9b07feedd7c1daa74ab31618b78d4c6bd32c4d8e521b7f0c41: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:56:22.733816 containerd[1546]: time="2025-11-23T22:56:22.732626720Z" level=info msg="CreateContainer within sandbox \"c7229389d821fe4c196c9868ec4abb78a7da42338db7d9c799b910df3727b241\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"01ddfd0e2bf54f9b07feedd7c1daa74ab31618b78d4c6bd32c4d8e521b7f0c41\"" Nov 23 22:56:22.736077 containerd[1546]: time="2025-11-23T22:56:22.733751675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-ckjtj,Uid:91280d56-7002-4fda-b0e5-b372b6025512,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e55d835fcb3238bca5e348bfaaae9f4f2ad7c71ca20b41e2741efc81ded0a15\"" Nov 23 22:56:22.738100 containerd[1546]: time="2025-11-23T22:56:22.738046578Z" level=info msg="StartContainer for \"01ddfd0e2bf54f9b07feedd7c1daa74ab31618b78d4c6bd32c4d8e521b7f0c41\"" Nov 23 22:56:22.747924 containerd[1546]: time="2025-11-23T22:56:22.747802658Z" level=info msg="connecting to shim 01ddfd0e2bf54f9b07feedd7c1daa74ab31618b78d4c6bd32c4d8e521b7f0c41" address="unix:///run/containerd/s/ec3d17c7604fc000560a6666bf127a3703cf09e3684d372d3bb2912ebab674e6" protocol=ttrpc version=3 Nov 23 22:56:22.748287 containerd[1546]: time="2025-11-23T22:56:22.748262656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:56:22.790310 containerd[1546]: time="2025-11-23T22:56:22.790265605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjft2,Uid:8226f51c-b67c-40ab-9e53-94d216a79ce7,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c41611e415efcda23fc8e7548385d24b83f2351d114646349b6e3e87dd780d5\"" Nov 23 22:56:22.811394 systemd[1]: Started cri-containerd-01ddfd0e2bf54f9b07feedd7c1daa74ab31618b78d4c6bd32c4d8e521b7f0c41.scope - libcontainer container 01ddfd0e2bf54f9b07feedd7c1daa74ab31618b78d4c6bd32c4d8e521b7f0c41. 
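Annotation: the PullImage "ghcr.io/flatcar/calico/goldmane:v3.30.4" request issued just above, like the other calico image pulls in the lines that follow, fails with "404 Not Found" from ghcr.io, i.e. per these log lines the v3.30.4 tags are not resolvable at that location. The Go sketch below is a rough, hypothetical illustration of the kind of manifest-existence check a registry client performs against the standard distribution API path; it assumes anonymous access and skips the token handshake ghcr.io may require, so any non-200 status should only be read as "tag not confirmed":

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Image reference taken from the PullImage log lines: ghcr.io/flatcar/calico/goldmane:v3.30.4
	const url = "https://ghcr.io/v2/flatcar/calico/goldmane/manifests/v3.30.4"

	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		panic(err)
	}
	// Common OCI/Docker manifest media types; real clients also negotiate auth tokens first.
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json")

	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// 200 would confirm the tag exists; 404 matches the "fetch failed after
	// status: 404 Not Found" lines below; 401 means the anonymous-token step
	// this sketch omits is required before the check is meaningful.
	fmt.Println("manifest check status:", resp.Status)
}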
Nov 23 22:56:22.878278 containerd[1546]: time="2025-11-23T22:56:22.878225848Z" level=info msg="StartContainer for \"01ddfd0e2bf54f9b07feedd7c1daa74ab31618b78d4c6bd32c4d8e521b7f0c41\" returns successfully" Nov 23 22:56:22.934315 kubelet[2763]: E1123 22:56:22.934244 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:56:22.983043 systemd-networkd[1425]: caliec6956034cd: Link UP Nov 23 22:56:22.983850 systemd-networkd[1425]: caliec6956034cd: Gained carrier Nov 23 22:56:22.991431 kubelet[2763]: I1123 22:56:22.991276 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lvslk" podStartSLOduration=45.991255428 podStartE2EDuration="45.991255428s" podCreationTimestamp="2025-11-23 22:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:56:22.986712006 +0000 UTC m=+51.472830317" watchObservedRunningTime="2025-11-23 22:56:22.991255428 +0000 UTC m=+51.477373739" Nov 23 22:56:22.994824 kubelet[2763]: I1123 22:56:22.993911 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lsgjk" podStartSLOduration=45.993891177 podStartE2EDuration="45.993891177s" podCreationTimestamp="2025-11-23 22:55:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 22:56:22.955099335 +0000 UTC m=+51.441217686" watchObservedRunningTime="2025-11-23 22:56:22.993891177 +0000 UTC m=+51.480009448" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.829 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0 calico-apiserver-64b6d4565b- calico-apiserver a5c13c52-4438-4f33-920f-ea52cca520b8 868 0 2025-11-23 22:55:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64b6d4565b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 calico-apiserver-64b6d4565b-wllpg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec6956034cd [] [] }} ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.829 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" 
WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.883 [INFO][4587] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" HandleID="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.883 [INFO][4587] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" HandleID="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-2-5-0c65a92823", "pod":"calico-apiserver-64b6d4565b-wllpg", "timestamp":"2025-11-23 22:56:22.883595466 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.883 [INFO][4587] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.883 [INFO][4587] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.883 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.897 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.905 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.915 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.934 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.939 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.940 [INFO][4587] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.947 [INFO][4587] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8 Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.959 [INFO][4587] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" host="ci-4459-1-2-5-0c65a92823" Nov 23 
22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.972 [INFO][4587] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.7/26] block=192.168.34.0/26 handle="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.972 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.7/26] handle="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.972 [INFO][4587] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 23 22:56:23.011241 containerd[1546]: 2025-11-23 22:56:22.973 [INFO][4587] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.7/26] IPv6=[] ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" HandleID="k8s-pod-network.148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" Nov 23 22:56:23.013397 containerd[1546]: 2025-11-23 22:56:22.976 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0", GenerateName:"calico-apiserver-64b6d4565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5c13c52-4438-4f33-920f-ea52cca520b8", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64b6d4565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"calico-apiserver-64b6d4565b-wllpg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec6956034cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:23.013397 containerd[1546]: 2025-11-23 22:56:22.977 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.7/32] ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" Nov 23 22:56:23.013397 containerd[1546]: 2025-11-23 22:56:22.977 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec6956034cd 
ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" Nov 23 22:56:23.013397 containerd[1546]: 2025-11-23 22:56:22.984 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" Nov 23 22:56:23.013397 containerd[1546]: 2025-11-23 22:56:22.986 [INFO][4546] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0", GenerateName:"calico-apiserver-64b6d4565b-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5c13c52-4438-4f33-920f-ea52cca520b8", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64b6d4565b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8", Pod:"calico-apiserver-64b6d4565b-wllpg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec6956034cd", MAC:"d2:c2:44:40:b2:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:23.013397 containerd[1546]: 2025-11-23 22:56:23.007 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" Namespace="calico-apiserver" Pod="calico-apiserver-64b6d4565b-wllpg" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--64b6d4565b--wllpg-eth0" Nov 23 22:56:23.053421 containerd[1546]: time="2025-11-23T22:56:23.052852860Z" level=info msg="connecting to shim 148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8" address="unix:///run/containerd/s/03f50d4ac7b25b96d218193653b9d7c73e82011edc627a4c41cfa257e7c889c2" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:23.085369 systemd[1]: Started cri-containerd-148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8.scope - libcontainer container 
148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8. Nov 23 22:56:23.100987 containerd[1546]: time="2025-11-23T22:56:23.100928308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:23.102394 containerd[1546]: time="2025-11-23T22:56:23.102316262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:56:23.102780 containerd[1546]: time="2025-11-23T22:56:23.102359222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:23.102835 kubelet[2763]: E1123 22:56:23.102643 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:23.103833 kubelet[2763]: E1123 22:56:23.103661 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:23.104158 kubelet[2763]: E1123 22:56:23.104026 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h69hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ckjtj_calico-system(91280d56-7002-4fda-b0e5-b372b6025512): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:23.105290 containerd[1546]: time="2025-11-23T22:56:23.105055411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:56:23.106172 kubelet[2763]: E1123 22:56:23.106075 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:56:23.122960 systemd-networkd[1425]: cali5b6b1f93c3c: Gained IPv6LL Nov 23 22:56:23.159296 containerd[1546]: time="2025-11-23T22:56:23.159250394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64b6d4565b-wllpg,Uid:a5c13c52-4438-4f33-920f-ea52cca520b8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"148584411ce0fd99c87905e89a5a8467110cbbfaa6f0e1990feac9a959f8ecc8\"" Nov 23 22:56:23.449642 containerd[1546]: time="2025-11-23T22:56:23.449572469Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:23.451285 containerd[1546]: time="2025-11-23T22:56:23.451151143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:56:23.451453 containerd[1546]: time="2025-11-23T22:56:23.451258542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:56:23.451563 kubelet[2763]: E1123 22:56:23.451501 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:23.451621 kubelet[2763]: E1123 22:56:23.451584 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:23.452025 kubelet[2763]: E1123 22:56:23.451934 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:23.453602 containerd[1546]: time="2025-11-23T22:56:23.453534733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:23.507760 systemd-networkd[1425]: calid9ee48eaa84: Gained IPv6LL Nov 23 22:56:23.570953 systemd-networkd[1425]: califa4b76184e1: Gained IPv6LL Nov 23 22:56:23.643941 containerd[1546]: time="2025-11-23T22:56:23.643874570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-sks64,Uid:d80e7c98-e2c6-4469-b5eb-05d06ffc6880,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:23.764279 systemd-networkd[1425]: calid56de307e2b: Gained IPv6LL Nov 23 22:56:23.787694 containerd[1546]: time="2025-11-23T22:56:23.787642513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:23.789661 containerd[1546]: time="2025-11-23T22:56:23.789471506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:23.789661 containerd[1546]: time="2025-11-23T22:56:23.789505186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:23.790159 kubelet[2763]: E1123 22:56:23.790021 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:23.790310 kubelet[2763]: E1123 22:56:23.790285 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:23.790770 kubelet[2763]: E1123 22:56:23.790654 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64b6d4565b-wllpg_calico-apiserver(a5c13c52-4438-4f33-920f-ea52cca520b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:23.791459 containerd[1546]: time="2025-11-23T22:56:23.790998020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:56:23.791933 kubelet[2763]: E1123 22:56:23.791872 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:56:23.826146 systemd-networkd[1425]: cali8220c18fe60: Link UP Nov 23 22:56:23.827240 systemd-networkd[1425]: cali8220c18fe60: Gained carrier Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.693 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0 calico-apiserver-78cb8dfc4- calico-apiserver d80e7c98-e2c6-4469-b5eb-05d06ffc6880 865 0 2025-11-23 22:55:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78cb8dfc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 calico-apiserver-78cb8dfc4-sks64 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8220c18fe60 [] [] }} ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.694 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.733 [INFO][4684] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" HandleID="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.733 [INFO][4684] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" HandleID="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024aff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-2-5-0c65a92823", "pod":"calico-apiserver-78cb8dfc4-sks64", "timestamp":"2025-11-23 22:56:23.73360289 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.733 [INFO][4684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.733 [INFO][4684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.733 [INFO][4684] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.759 [INFO][4684] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.768 [INFO][4684] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.776 [INFO][4684] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.782 [INFO][4684] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.786 [INFO][4684] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.786 [INFO][4684] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.791 [INFO][4684] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48 Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.802 [INFO][4684] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.816 [INFO][4684] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.8/26] block=192.168.34.0/26 handle="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.816 [INFO][4684] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.8/26] handle="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.816 [INFO][4684] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
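The IPAM trace above reports that this node holds an affinity for the block 192.168.34.0/26 and claims 192.168.34.8 from it while the host-wide lock is held. A minimal sketch of the block arithmetic those entries imply, using Python's ipaddress module; the CIDR and the claimed address are copied from the log, nothing else is.

# Block arithmetic implied by the Calico IPAM entries above. The /26 block and
# the claimed address are taken from the log; the rest is standard library.
import ipaddress

block = ipaddress.ip_network("192.168.34.0/26")
claimed = ipaddress.ip_address("192.168.34.8")

print(block.num_addresses)        # 64 addresses per /26 block
print(claimed in block)           # True: the claimed IP lies inside the block
print(list(block.hosts())[:3])    # first usable addresses: .1, .2, .3

Because the affinity is per host, later allocations on ci-4459-1-2-5-0c65a92823 are drawn from the same /26, which matches the 192.168.34.9 assignment further down in the log.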
Nov 23 22:56:23.846747 containerd[1546]: 2025-11-23 22:56:23.816 [INFO][4684] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.8/26] IPv6=[] ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" HandleID="k8s-pod-network.dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" Nov 23 22:56:23.848036 containerd[1546]: 2025-11-23 22:56:23.820 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0", GenerateName:"calico-apiserver-78cb8dfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d80e7c98-e2c6-4469-b5eb-05d06ffc6880", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cb8dfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"calico-apiserver-78cb8dfc4-sks64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8220c18fe60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:23.848036 containerd[1546]: 2025-11-23 22:56:23.822 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.8/32] ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" Nov 23 22:56:23.848036 containerd[1546]: 2025-11-23 22:56:23.822 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8220c18fe60 ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" Nov 23 22:56:23.848036 containerd[1546]: 2025-11-23 22:56:23.827 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" Nov 23 22:56:23.848036 containerd[1546]: 2025-11-23 22:56:23.829 [INFO][4672] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0", GenerateName:"calico-apiserver-78cb8dfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"d80e7c98-e2c6-4469-b5eb-05d06ffc6880", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cb8dfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48", Pod:"calico-apiserver-78cb8dfc4-sks64", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8220c18fe60", MAC:"ce:cf:46:57:60:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:23.848036 containerd[1546]: 2025-11-23 22:56:23.842 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-sks64" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--sks64-eth0" Nov 23 22:56:23.879760 containerd[1546]: time="2025-11-23T22:56:23.879028906Z" level=info msg="connecting to shim dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48" address="unix:///run/containerd/s/957998673181f7777036c4b324b3e532207baac32adafe8d92c96e1dccdb8944" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:23.913095 systemd[1]: Started cri-containerd-dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48.scope - libcontainer container dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48. 
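Every failed pull in this log follows the same containerd pattern: an info-level "fetch failed after status: 404 Not Found" entry followed by an error-level PullImage ... failed entry carrying the gRPC code. A minimal sketch, assuming journalctl-style text on stdin, of extracting the failing image references and codes from lines in that shape; the regular expression is written against the exact message format visible here and is an assumption, not a stable containerd interface.

# Extract failing image references from journal lines shaped like the ones above:
#   containerd[...]: ... level=error msg="PullImage \"ghcr.io/...\" failed"
#                    error="rpc error: code = NotFound desc = ..."
# The regex mirrors that message format; it is not a containerd API.
import re
import sys
from collections import Counter

PULL_FAILED = re.compile(
    r'level=error msg="PullImage \\?"(?P<image>[^"\\]+)\\?" failed" '
    r'error="rpc error: code = (?P<code>\w+)'
)

failures = Counter()
for line in sys.stdin:
    match = PULL_FAILED.search(line)
    if match:
        failures[(match.group("image"), match.group("code"))] += 1

for (image, code), n in failures.most_common():
    print(f"{n:3d}x {code:10s} {image}")

Fed something like journalctl -u containerd through it, this prints one line per distinct (image, code) pair; for the entries above that collapses to a handful of ghcr.io/flatcar/calico/* references, all NotFound.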
Nov 23 22:56:23.938457 kubelet[2763]: E1123 22:56:23.938301 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:56:23.941615 kubelet[2763]: E1123 22:56:23.941565 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:56:24.011896 containerd[1546]: time="2025-11-23T22:56:24.011769135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-sks64,Uid:d80e7c98-e2c6-4469-b5eb-05d06ffc6880,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dc5e922efba75cb8e2d31503a792783fa4552fd0d0c6bdb03bdfd8fd6a704e48\"" Nov 23 22:56:24.146093 containerd[1546]: time="2025-11-23T22:56:24.145779524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:24.149787 containerd[1546]: time="2025-11-23T22:56:24.149703668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:56:24.149934 containerd[1546]: time="2025-11-23T22:56:24.149747508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:56:24.150129 kubelet[2763]: E1123 22:56:24.150011 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:24.150129 kubelet[2763]: E1123 22:56:24.150123 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:24.150402 kubelet[2763]: E1123 22:56:24.150359 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:24.151173 containerd[1546]: time="2025-11-23T22:56:24.150693545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:24.151929 kubelet[2763]: E1123 22:56:24.151834 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:24.489916 containerd[1546]: time="2025-11-23T22:56:24.489646643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:24.492017 containerd[1546]: time="2025-11-23T22:56:24.491928434Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:24.492380 containerd[1546]: time="2025-11-23T22:56:24.492306792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:24.494217 kubelet[2763]: E1123 22:56:24.492792 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:24.494217 kubelet[2763]: E1123 22:56:24.492869 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:24.494217 kubelet[2763]: E1123 22:56:24.493044 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qq2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-sks64_calico-apiserver(d80e7c98-e2c6-4469-b5eb-05d06ffc6880): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:24.495384 kubelet[2763]: E1123 22:56:24.494619 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:56:24.643429 containerd[1546]: time="2025-11-23T22:56:24.643312914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-tz5zf,Uid:adda981c-9ce7-4e01-b56b-dc8bfccf049e,Namespace:calico-apiserver,Attempt:0,}" Nov 23 22:56:24.826771 systemd-networkd[1425]: califa01d604cf3: Link UP Nov 23 22:56:24.827055 systemd-networkd[1425]: califa01d604cf3: Gained carrier Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.706 [INFO][4748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0 calico-apiserver-78cb8dfc4- calico-apiserver adda981c-9ce7-4e01-b56b-dc8bfccf049e 872 0 2025-11-23 22:55:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78cb8dfc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-2-5-0c65a92823 calico-apiserver-78cb8dfc4-tz5zf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califa01d604cf3 [] [] }} ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.706 [INFO][4748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.742 [INFO][4760] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" HandleID="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.744 [INFO][4760] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" HandleID="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-2-5-0c65a92823", "pod":"calico-apiserver-78cb8dfc4-tz5zf", "timestamp":"2025-11-23 22:56:24.74296344 +0000 UTC"}, Hostname:"ci-4459-1-2-5-0c65a92823", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.745 [INFO][4760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.745 [INFO][4760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.745 [INFO][4760] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-2-5-0c65a92823' Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.765 [INFO][4760] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.772 [INFO][4760] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.780 [INFO][4760] ipam/ipam.go 511: Trying affinity for 192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.783 [INFO][4760] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.790 [INFO][4760] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.0/26 host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.790 [INFO][4760] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.0/26 handle="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.794 [INFO][4760] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.801 [INFO][4760] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.0/26 handle="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.812 [INFO][4760] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.9/26] block=192.168.34.0/26 handle="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.812 [INFO][4760] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.9/26] handle="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" host="ci-4459-1-2-5-0c65a92823" Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.812 [INFO][4760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
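The NotFound errors above come from containerd resolving each tag against ghcr.io and getting a 404 back. A hedged sketch of reproducing that check directly against the registry, assuming ghcr.io follows the standard OCI distribution / Docker token flow for anonymous pulls of public images; the repository and tag are taken from one failing reference in the log, while the token and manifest endpoints are the generic protocol, not something this log confirms.

# Ask the registry whether a tag exists, reproducing the 404 containerd reports.
# Assumes the standard OCI distribution token flow for anonymous pulls; the
# endpoints and headers are the generic protocol, not taken from this log.
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/apiserver"   # repository from the failing reference
TAG = "v3.30.4"                           # tag from the failing reference

# 1. Obtain an anonymous pull token for the repository.
token_url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{REPOSITORY}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# 2. Fetch the manifest for the tag: 200 means the tag exists, 404 matches the log.
manifest_url = f"https://{REGISTRY}/v2/{REPOSITORY}/manifests/{TAG}"
request = urllib.request.Request(
    manifest_url,
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
)
try:
    with urllib.request.urlopen(request) as resp:
        print(f"{REPOSITORY}:{TAG} exists (HTTP {resp.status})")
except urllib.error.HTTPError as err:
    print(f"{REPOSITORY}:{TAG}: HTTP {err.code}")

A 404 from the manifest endpoint is exactly the "fetch failed after status: 404 Not Found" that containerd logs before kubelet surfaces ErrImagePull.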
Nov 23 22:56:24.845378 containerd[1546]: 2025-11-23 22:56:24.812 [INFO][4760] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.9/26] IPv6=[] ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" HandleID="k8s-pod-network.8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Workload="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" Nov 23 22:56:24.847412 containerd[1546]: 2025-11-23 22:56:24.816 [INFO][4748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0", GenerateName:"calico-apiserver-78cb8dfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"adda981c-9ce7-4e01-b56b-dc8bfccf049e", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cb8dfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"", Pod:"calico-apiserver-78cb8dfc4-tz5zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa01d604cf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:24.847412 containerd[1546]: 2025-11-23 22:56:24.818 [INFO][4748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.9/32] ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" Nov 23 22:56:24.847412 containerd[1546]: 2025-11-23 22:56:24.818 [INFO][4748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa01d604cf3 ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" Nov 23 22:56:24.847412 containerd[1546]: 2025-11-23 22:56:24.826 [INFO][4748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" Nov 23 22:56:24.847412 containerd[1546]: 2025-11-23 22:56:24.828 [INFO][4748] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0", GenerateName:"calico-apiserver-78cb8dfc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"adda981c-9ce7-4e01-b56b-dc8bfccf049e", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.November, 23, 22, 55, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78cb8dfc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-2-5-0c65a92823", ContainerID:"8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a", Pod:"calico-apiserver-78cb8dfc4-tz5zf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califa01d604cf3", MAC:"e6:f9:6d:aa:00:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 23 22:56:24.847412 containerd[1546]: 2025-11-23 22:56:24.840 [INFO][4748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" Namespace="calico-apiserver" Pod="calico-apiserver-78cb8dfc4-tz5zf" WorkloadEndpoint="ci--4459--1--2--5--0c65a92823-k8s-calico--apiserver--78cb8dfc4--tz5zf-eth0" Nov 23 22:56:24.883553 containerd[1546]: time="2025-11-23T22:56:24.883487003Z" level=info msg="connecting to shim 8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a" address="unix:///run/containerd/s/41437a06ec97e531c831dead9be647ec20900ad37aa00aa191532757a4304b98" namespace=k8s.io protocol=ttrpc version=3 Nov 23 22:56:24.914944 systemd-networkd[1425]: caliec6956034cd: Gained IPv6LL Nov 23 22:56:24.932080 systemd[1]: Started cri-containerd-8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a.scope - libcontainer container 8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a. 
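Each ErrImagePull above is followed by ImagePullBackOff entries as kubelet defers further pull attempts on a capped exponential schedule, which is what the repeated "Back-off pulling image" messages report. A minimal sketch of that retry pattern; the 10-second initial delay and 5-minute cap are commonly cited kubelet defaults and are assumptions here, not values read from this log.

# Capped exponential backoff of the kind kubelet applies between image pull
# retries. Initial delay and cap are assumed defaults, not taken from this log;
# the shape (double until capped) is the point of the sketch.
def backoff_delays(initial: float = 10.0, cap: float = 300.0):
    """Yield successive retry delays: initial, 2*initial, ..., capped at `cap`."""
    delay = initial
    while True:
        yield delay
        delay = min(delay * 2, cap)

delays = backoff_delays()
for attempt in range(1, 8):
    print(f"retry {attempt}: wait {next(delays):.0f}s")
# retry 1: 10s, retry 2: 20s, ... retry 6 onward: 300s (capped)

The pod_workers "Error syncing pod, skipping" lines in between are the sync loop reporting that a back-off is still in effect, not new pull attempts.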
Nov 23 22:56:24.961108 kubelet[2763]: E1123 22:56:24.961026 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:56:24.961458 kubelet[2763]: E1123 22:56:24.961179 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:56:24.961886 kubelet[2763]: E1123 22:56:24.961841 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:25.024427 containerd[1546]: time="2025-11-23T22:56:25.024376447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78cb8dfc4-tz5zf,Uid:adda981c-9ce7-4e01-b56b-dc8bfccf049e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8ce847a0bb72a23f5e8225c0b6c4bf4bfa3521df669ead986dc3efefd055eb4a\"" Nov 23 22:56:25.027418 containerd[1546]: time="2025-11-23T22:56:25.027164556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:25.351761 containerd[1546]: time="2025-11-23T22:56:25.351646167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:25.354621 containerd[1546]: time="2025-11-23T22:56:25.354539396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:25.354918 containerd[1546]: time="2025-11-23T22:56:25.354579475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=77" Nov 23 22:56:25.355980 kubelet[2763]: E1123 22:56:25.355283 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:25.355980 kubelet[2763]: E1123 22:56:25.355497 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:25.355980 kubelet[2763]: E1123 22:56:25.355667 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xndw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-tz5zf_calico-apiserver(adda981c-9ce7-4e01-b56b-dc8bfccf049e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:25.356878 kubelet[2763]: E1123 22:56:25.356823 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:56:25.619117 systemd-networkd[1425]: cali8220c18fe60: Gained IPv6LL Nov 23 22:56:25.962000 kubelet[2763]: E1123 22:56:25.961911 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:56:25.962860 kubelet[2763]: E1123 22:56:25.962354 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:56:26.835092 systemd-networkd[1425]: califa01d604cf3: Gained IPv6LL Nov 23 22:56:26.964999 kubelet[2763]: E1123 22:56:26.964898 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:56:31.647342 containerd[1546]: time="2025-11-23T22:56:31.646481093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:56:31.984657 containerd[1546]: time="2025-11-23T22:56:31.984552651Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:31.986044 containerd[1546]: time="2025-11-23T22:56:31.985977445Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:56:31.986285 containerd[1546]: time="2025-11-23T22:56:31.986062925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:56:31.986461 kubelet[2763]: E1123 22:56:31.986429 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:31.986972 kubelet[2763]: E1123 22:56:31.986782 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:31.986972 kubelet[2763]: E1123 22:56:31.986933 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0699b6371c9d444dac6521e58a9fef96,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:31.989442 containerd[1546]: time="2025-11-23T22:56:31.989414993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:56:32.322331 containerd[1546]: time="2025-11-23T22:56:32.322200579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:32.324038 containerd[1546]: time="2025-11-23T22:56:32.323959053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:56:32.324183 containerd[1546]: time="2025-11-23T22:56:32.324087613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:32.324372 kubelet[2763]: E1123 22:56:32.324332 2763 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:32.324485 kubelet[2763]: E1123 22:56:32.324459 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:32.325222 kubelet[2763]: E1123 22:56:32.325168 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:32.327085 kubelet[2763]: E1123 22:56:32.327002 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:56:36.643882 containerd[1546]: time="2025-11-23T22:56:36.643824278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:36.977651 containerd[1546]: time="2025-11-23T22:56:36.977553718Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:36.980658 containerd[1546]: time="2025-11-23T22:56:36.979376287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:36.980658 containerd[1546]: time="2025-11-23T22:56:36.979485370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:36.980658 containerd[1546]: time="2025-11-23T22:56:36.980158108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:56:36.980914 kubelet[2763]: E1123 22:56:36.979700 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:36.980914 kubelet[2763]: E1123 22:56:36.979779 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:36.980914 kubelet[2763]: E1123 22:56:36.980628 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64b6d4565b-wllpg_calico-apiserver(a5c13c52-4438-4f33-920f-ea52cca520b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:36.983780 kubelet[2763]: E1123 22:56:36.982760 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:56:37.317530 containerd[1546]: time="2025-11-23T22:56:37.317381017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:37.319255 containerd[1546]: time="2025-11-23T22:56:37.319116022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:56:37.319255 containerd[1546]: time="2025-11-23T22:56:37.319241586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:37.319492 kubelet[2763]: E1123 22:56:37.319438 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:37.319549 kubelet[2763]: E1123 22:56:37.319509 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:56:37.319882 kubelet[2763]: E1123 22:56:37.319811 2763 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn6zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c79f46457-wvsqw_calico-system(5efde8bf-2f30-47b7-ac7d-0827fb837ab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:37.320844 containerd[1546]: time="2025-11-23T22:56:37.320801907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:37.321371 kubelet[2763]: E1123 22:56:37.321297 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:56:37.657614 containerd[1546]: time="2025-11-23T22:56:37.657583146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:37.666411 containerd[1546]: time="2025-11-23T22:56:37.665801322Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:37.666411 containerd[1546]: time="2025-11-23T22:56:37.665900725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:37.666562 kubelet[2763]: E1123 22:56:37.666024 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:37.666562 kubelet[2763]: E1123 22:56:37.666081 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:37.666562 kubelet[2763]: E1123 22:56:37.666315 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qq2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-sks64_calico-apiserver(d80e7c98-e2c6-4469-b5eb-05d06ffc6880): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:37.669761 kubelet[2763]: E1123 22:56:37.669592 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:56:37.671643 containerd[1546]: time="2025-11-23T22:56:37.671532072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:56:38.020859 containerd[1546]: time="2025-11-23T22:56:38.020473895Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:38.022543 containerd[1546]: time="2025-11-23T22:56:38.022464706Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:56:38.022934 containerd[1546]: time="2025-11-23T22:56:38.022593389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:56:38.023188 kubelet[2763]: E1123 22:56:38.023142 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:38.023807 kubelet[2763]: E1123 22:56:38.023545 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:56:38.023807 kubelet[2763]: E1123 22:56:38.023709 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:38.026788 containerd[1546]: time="2025-11-23T22:56:38.026458648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:56:38.372121 containerd[1546]: time="2025-11-23T22:56:38.371480464Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:38.373746 containerd[1546]: time="2025-11-23T22:56:38.373651519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:56:38.373854 containerd[1546]: time="2025-11-23T22:56:38.373792003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:56:38.374382 kubelet[2763]: E1123 22:56:38.374020 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:38.374382 kubelet[2763]: E1123 22:56:38.374149 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:56:38.374382 kubelet[2763]: E1123 22:56:38.374310 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:38.375839 kubelet[2763]: E1123 22:56:38.375785 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:38.645698 containerd[1546]: time="2025-11-23T22:56:38.645551755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:56:38.990094 containerd[1546]: time="2025-11-23T22:56:38.989997276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:38.991908 containerd[1546]: time="2025-11-23T22:56:38.991809563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:56:38.992052 containerd[1546]: time="2025-11-23T22:56:38.991880364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:38.992298 kubelet[2763]: E1123 22:56:38.992225 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:38.992372 kubelet[2763]: E1123 22:56:38.992308 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:56:38.992746 kubelet[2763]: E1123 22:56:38.992553 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h69hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ckjtj_calico-system(91280d56-7002-4fda-b0e5-b372b6025512): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:38.994134 kubelet[2763]: E1123 22:56:38.993919 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:56:41.645072 containerd[1546]: 
time="2025-11-23T22:56:41.644943884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:41.984043 containerd[1546]: time="2025-11-23T22:56:41.983843369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:41.985915 containerd[1546]: time="2025-11-23T22:56:41.985760654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:41.986068 containerd[1546]: time="2025-11-23T22:56:41.985880457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:41.986743 kubelet[2763]: E1123 22:56:41.986636 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:41.987206 kubelet[2763]: E1123 22:56:41.986747 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:41.987206 kubelet[2763]: E1123 22:56:41.986947 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xndw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-tz5zf_calico-apiserver(adda981c-9ce7-4e01-b56b-dc8bfccf049e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:41.988758 kubelet[2763]: E1123 22:56:41.988577 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:56:46.644117 kubelet[2763]: E1123 22:56:46.643970 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:56:47.643327 kubelet[2763]: E1123 22:56:47.643197 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:56:48.643035 kubelet[2763]: E1123 22:56:48.642924 2763 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:56:49.653712 kubelet[2763]: E1123 22:56:49.653398 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:56:52.644047 kubelet[2763]: E1123 22:56:52.643996 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:56:52.646164 kubelet[2763]: E1123 22:56:52.646119 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:56:57.645950 kubelet[2763]: E1123 22:56:57.644980 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" 
podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:56:58.643975 containerd[1546]: time="2025-11-23T22:56:58.643906668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:56:58.982115 containerd[1546]: time="2025-11-23T22:56:58.982033910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:58.983703 containerd[1546]: time="2025-11-23T22:56:58.983526571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:56:58.983703 containerd[1546]: time="2025-11-23T22:56:58.983627292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:56:58.983884 kubelet[2763]: E1123 22:56:58.983830 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:58.984144 kubelet[2763]: E1123 22:56:58.983883 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:56:58.986187 kubelet[2763]: E1123 22:56:58.984178 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64b6d4565b-wllpg_calico-apiserver(a5c13c52-4438-4f33-920f-ea52cca520b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:58.986187 kubelet[2763]: E1123 22:56:58.985825 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:56:58.986747 containerd[1546]: time="2025-11-23T22:56:58.985925083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:56:59.323078 containerd[1546]: time="2025-11-23T22:56:59.322471170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:59.324762 containerd[1546]: time="2025-11-23T22:56:59.324617479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:56:59.325023 containerd[1546]: time="2025-11-23T22:56:59.324715800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:56:59.325312 kubelet[2763]: E1123 22:56:59.325239 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:59.325312 kubelet[2763]: E1123 22:56:59.325302 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:56:59.325465 kubelet[2763]: E1123 22:56:59.325421 2763 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0699b6371c9d444dac6521e58a9fef96,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:59.328083 containerd[1546]: time="2025-11-23T22:56:59.328005444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:56:59.669074 containerd[1546]: time="2025-11-23T22:56:59.668889101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:56:59.670678 containerd[1546]: time="2025-11-23T22:56:59.670553883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:56:59.670678 containerd[1546]: time="2025-11-23T22:56:59.670639884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:56:59.671080 kubelet[2763]: E1123 22:56:59.670997 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:59.671145 kubelet[2763]: E1123 22:56:59.671094 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:56:59.671516 kubelet[2763]: E1123 22:56:59.671424 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:56:59.672992 kubelet[2763]: E1123 22:56:59.672911 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:57:00.646866 containerd[1546]: time="2025-11-23T22:57:00.646806624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:00.976029 containerd[1546]: 
time="2025-11-23T22:57:00.975963275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:00.977827 containerd[1546]: time="2025-11-23T22:57:00.977697497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:00.977986 containerd[1546]: time="2025-11-23T22:57:00.977820218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:00.978663 kubelet[2763]: E1123 22:57:00.978097 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:00.978663 kubelet[2763]: E1123 22:57:00.978146 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:00.978663 kubelet[2763]: E1123 22:57:00.978288 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qq2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-sks64_calico-apiserver(d80e7c98-e2c6-4469-b5eb-05d06ffc6880): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:00.979836 kubelet[2763]: E1123 22:57:00.979794 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:57:01.646332 containerd[1546]: time="2025-11-23T22:57:01.646183196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:57:01.986255 containerd[1546]: time="2025-11-23T22:57:01.986186132Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:01.987356 containerd[1546]: time="2025-11-23T22:57:01.987302426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:57:01.987507 containerd[1546]: time="2025-11-23T22:57:01.987420787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:57:01.987670 kubelet[2763]: E1123 22:57:01.987622 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:01.988060 kubelet[2763]: E1123 22:57:01.987690 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:01.988410 kubelet[2763]: E1123 22:57:01.988324 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn6zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c79f46457-wvsqw_calico-system(5efde8bf-2f30-47b7-ac7d-0827fb837ab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:01.989531 kubelet[2763]: E1123 22:57:01.989484 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:57:06.643031 containerd[1546]: time="2025-11-23T22:57:06.642413539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:57:06.981213 containerd[1546]: time="2025-11-23T22:57:06.981154097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:06.982496 containerd[1546]: time="2025-11-23T22:57:06.982415390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:57:06.982623 containerd[1546]: time="2025-11-23T22:57:06.982453831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:06.983059 kubelet[2763]: E1123 22:57:06.982777 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:57:06.983059 kubelet[2763]: E1123 22:57:06.982836 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:57:06.983059 kubelet[2763]: E1123 22:57:06.982985 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h69hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ckjtj_calico-system(91280d56-7002-4fda-b0e5-b372b6025512): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:06.984584 kubelet[2763]: E1123 22:57:06.984527 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:57:07.645896 containerd[1546]: time="2025-11-23T22:57:07.645825548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:57:07.998840 containerd[1546]: time="2025-11-23T22:57:07.998769983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:08.000283 containerd[1546]: time="2025-11-23T22:57:08.000214318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:57:08.000385 containerd[1546]: time="2025-11-23T22:57:08.000319959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:57:08.000682 kubelet[2763]: E1123 22:57:08.000622 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:57:08.000989 kubelet[2763]: E1123 22:57:08.000691 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:57:08.001825 kubelet[2763]: E1123 22:57:08.001760 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:08.005526 containerd[1546]: time="2025-11-23T22:57:08.005431130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:57:08.354262 containerd[1546]: time="2025-11-23T22:57:08.353710448Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:08.355666 containerd[1546]: time="2025-11-23T22:57:08.355620547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:57:08.355784 containerd[1546]: time="2025-11-23T22:57:08.355718308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:57:08.356035 kubelet[2763]: E1123 22:57:08.355979 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:57:08.356092 kubelet[2763]: E1123 22:57:08.356037 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:57:08.356187 kubelet[2763]: E1123 22:57:08.356146 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:08.357794 kubelet[2763]: E1123 22:57:08.357667 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:57:10.643742 containerd[1546]: time="2025-11-23T22:57:10.643622346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:10.987560 containerd[1546]: time="2025-11-23T22:57:10.987344893Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:10.988887 containerd[1546]: time="2025-11-23T22:57:10.988749506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:10.988887 containerd[1546]: time="2025-11-23T22:57:10.988856987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:10.989236 kubelet[2763]: E1123 22:57:10.989183 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:10.989599 kubelet[2763]: E1123 22:57:10.989244 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:10.989599 kubelet[2763]: E1123 22:57:10.989367 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xndw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-tz5zf_calico-apiserver(adda981c-9ce7-4e01-b56b-dc8bfccf049e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:10.991044 kubelet[2763]: E1123 22:57:10.990644 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:57:12.643321 kubelet[2763]: E1123 22:57:12.642919 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:57:12.644518 kubelet[2763]: E1123 22:57:12.644338 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:57:12.644518 kubelet[2763]: E1123 22:57:12.644371 2763 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:57:16.643688 kubelet[2763]: E1123 22:57:16.643604 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:57:17.644741 kubelet[2763]: E1123 22:57:17.644277 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:57:20.649798 kubelet[2763]: E1123 22:57:20.649653 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:57:23.644523 kubelet[2763]: E1123 22:57:23.644450 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" 
podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:57:26.643930 kubelet[2763]: E1123 22:57:26.643299 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:57:26.645643 kubelet[2763]: E1123 22:57:26.645462 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:57:27.645133 kubelet[2763]: E1123 22:57:27.645069 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:57:29.647854 kubelet[2763]: E1123 22:57:29.644324 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:57:31.645446 kubelet[2763]: E1123 22:57:31.643168 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:57:32.645228 kubelet[2763]: E1123 22:57:32.645109 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:57:35.646071 kubelet[2763]: E1123 22:57:35.645259 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:57:37.643938 kubelet[2763]: E1123 22:57:37.643866 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:57:37.647489 kubelet[2763]: E1123 22:57:37.647413 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:57:41.647424 containerd[1546]: 
time="2025-11-23T22:57:41.647336170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:41.983876 containerd[1546]: time="2025-11-23T22:57:41.983671609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:41.985191 containerd[1546]: time="2025-11-23T22:57:41.985124814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:41.985672 containerd[1546]: time="2025-11-23T22:57:41.985229894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:41.985767 kubelet[2763]: E1123 22:57:41.985494 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:41.985767 kubelet[2763]: E1123 22:57:41.985566 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:41.987157 kubelet[2763]: E1123 22:57:41.985794 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64b6d4565b-wllpg_calico-apiserver(a5c13c52-4438-4f33-920f-ea52cca520b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:41.987157 kubelet[2763]: E1123 22:57:41.986964 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:57:42.643536 containerd[1546]: time="2025-11-23T22:57:42.643394210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:57:42.975409 containerd[1546]: time="2025-11-23T22:57:42.975149237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:42.976850 containerd[1546]: time="2025-11-23T22:57:42.976780763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:57:42.977149 containerd[1546]: time="2025-11-23T22:57:42.976833123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:57:42.977331 kubelet[2763]: E1123 22:57:42.977271 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:42.977331 kubelet[2763]: E1123 22:57:42.977327 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:57:42.978370 kubelet[2763]: E1123 22:57:42.977769 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn6zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c79f46457-wvsqw_calico-system(5efde8bf-2f30-47b7-ac7d-0827fb837ab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:42.979866 kubelet[2763]: E1123 22:57:42.979810 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:57:43.645198 kubelet[2763]: E1123 22:57:43.643927 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:57:45.646319 kubelet[2763]: E1123 22:57:45.646238 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:57:48.643549 containerd[1546]: time="2025-11-23T22:57:48.643224062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:57:49.181489 containerd[1546]: time="2025-11-23T22:57:49.181423384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:49.183146 containerd[1546]: time="2025-11-23T22:57:49.182994029Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:57:49.183146 containerd[1546]: time="2025-11-23T22:57:49.183055509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:57:49.183369 kubelet[2763]: E1123 22:57:49.183252 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:57:49.183369 kubelet[2763]: E1123 22:57:49.183303 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:57:49.183905 kubelet[2763]: E1123 22:57:49.183406 2763 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0699b6371c9d444dac6521e58a9fef96,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:49.187168 containerd[1546]: time="2025-11-23T22:57:49.186983841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:57:49.526713 containerd[1546]: time="2025-11-23T22:57:49.526595672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:49.528708 containerd[1546]: time="2025-11-23T22:57:49.527955756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:57:49.528708 containerd[1546]: time="2025-11-23T22:57:49.528052797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:57:49.529994 kubelet[2763]: E1123 22:57:49.529148 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:57:49.529994 kubelet[2763]: E1123 22:57:49.529207 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:57:49.530313 kubelet[2763]: E1123 22:57:49.530255 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:49.532401 kubelet[2763]: E1123 22:57:49.532323 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:57:49.645193 kubelet[2763]: E1123 22:57:49.644841 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:57:50.191668 systemd[1]: Started sshd@7-188.245.196.203:22-115.231.78.11:30000.service - OpenSSH per-connection server daemon (115.231.78.11:30000). Nov 23 22:57:51.320238 sshd[4960]: Invalid user username from 115.231.78.11 port 30000 Nov 23 22:57:51.521796 sshd[4960]: Connection closed by invalid user username 115.231.78.11 port 30000 [preauth] Nov 23 22:57:51.526062 systemd[1]: sshd@7-188.245.196.203:22-115.231.78.11:30000.service: Deactivated successfully. Nov 23 22:57:52.648186 containerd[1546]: time="2025-11-23T22:57:52.647884466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:57:52.993088 containerd[1546]: time="2025-11-23T22:57:52.992904111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:52.994951 containerd[1546]: time="2025-11-23T22:57:52.994817236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:57:52.994951 containerd[1546]: time="2025-11-23T22:57:52.994830716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:52.995397 kubelet[2763]: E1123 22:57:52.995105 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:52.995397 kubelet[2763]: E1123 22:57:52.995211 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:57:52.996043 kubelet[2763]: E1123 22:57:52.995382 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2qq2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-sks64_calico-apiserver(d80e7c98-e2c6-4469-b5eb-05d06ffc6880): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:52.997031 kubelet[2763]: E1123 22:57:52.996989 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:57:53.645753 kubelet[2763]: E1123 22:57:53.644414 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:57:54.643378 containerd[1546]: time="2025-11-23T22:57:54.643114924Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:57:54.993216 containerd[1546]: time="2025-11-23T22:57:54.992886329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:57:54.994257 containerd[1546]: time="2025-11-23T22:57:54.994168293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:57:54.995049 containerd[1546]: time="2025-11-23T22:57:54.994696094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:57:54.995948 kubelet[2763]: E1123 22:57:54.995860 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:57:54.995948 kubelet[2763]: E1123 22:57:54.995942 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:57:54.998357 kubelet[2763]: E1123 22:57:54.996212 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h69hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ckjtj_calico-system(91280d56-7002-4fda-b0e5-b372b6025512): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:57:54.998357 kubelet[2763]: E1123 22:57:54.997663 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:57:58.644222 kubelet[2763]: E1123 22:57:58.644074 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:58:00.644581 containerd[1546]: time="2025-11-23T22:58:00.644428457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 23 22:58:00.982913 containerd[1546]: time="2025-11-23T22:58:00.982850741Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:00.985433 containerd[1546]: time="2025-11-23T22:58:00.984774866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 23 22:58:00.986767 containerd[1546]: time="2025-11-23T22:58:00.985684028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 23 22:58:00.986767 containerd[1546]: time="2025-11-23T22:58:00.986593789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:58:00.986913 kubelet[2763]: E1123 22:58:00.985871 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:58:00.986913 kubelet[2763]: E1123 22:58:00.985925 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 23 22:58:00.986913 kubelet[2763]: E1123 22:58:00.986162 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:01.319837 containerd[1546]: time="2025-11-23T22:58:01.319645844Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:01.321391 containerd[1546]: time="2025-11-23T22:58:01.321319968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:58:01.321556 containerd[1546]: time="2025-11-23T22:58:01.321459368Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:58:01.322039 kubelet[2763]: E1123 22:58:01.321788 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:58:01.322039 kubelet[2763]: E1123 22:58:01.321846 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:58:01.322331 kubelet[2763]: E1123 22:58:01.322245 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xndw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-78cb8dfc4-tz5zf_calico-apiserver(adda981c-9ce7-4e01-b56b-dc8bfccf049e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:01.322696 containerd[1546]: time="2025-11-23T22:58:01.322569090Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 23 22:58:01.324157 kubelet[2763]: E1123 22:58:01.324112 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:58:01.662710 containerd[1546]: time="2025-11-23T22:58:01.662657039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:58:01.664143 containerd[1546]: time="2025-11-23T22:58:01.664085202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 23 22:58:01.664432 containerd[1546]: time="2025-11-23T22:58:01.664177482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 23 22:58:01.664770 kubelet[2763]: E1123 22:58:01.664317 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:58:01.664770 kubelet[2763]: E1123 22:58:01.664372 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 23 22:58:01.664770 kubelet[2763]: E1123 22:58:01.664533 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4prqc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zjft2_calico-system(8226f51c-b67c-40ab-9e53-94d216a79ce7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 23 22:58:01.665835 kubelet[2763]: E1123 22:58:01.665684 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:58:03.646085 kubelet[2763]: E1123 22:58:03.646012 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for 
\"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:58:05.644888 kubelet[2763]: E1123 22:58:05.643394 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:58:06.643856 kubelet[2763]: E1123 22:58:06.642778 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:58:07.644752 kubelet[2763]: E1123 22:58:07.644527 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:58:09.107951 systemd[1]: Started sshd@8-188.245.196.203:22-139.178.89.65:47224.service - OpenSSH per-connection server daemon (139.178.89.65:47224). Nov 23 22:58:10.092818 sshd[4994]: Accepted publickey for core from 139.178.89.65 port 47224 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:10.094398 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:10.101108 systemd-logind[1528]: New session 8 of user core. Nov 23 22:58:10.107017 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 23 22:58:10.886065 sshd[4997]: Connection closed by 139.178.89.65 port 47224 Nov 23 22:58:10.887040 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:10.894827 systemd[1]: sshd@8-188.245.196.203:22-139.178.89.65:47224.service: Deactivated successfully. Nov 23 22:58:10.897718 systemd[1]: session-8.scope: Deactivated successfully. Nov 23 22:58:10.899637 systemd-logind[1528]: Session 8 logged out. Waiting for processes to exit. Nov 23 22:58:10.901664 systemd-logind[1528]: Removed session 8. 
Nov 23 22:58:12.644628 kubelet[2763]: E1123 22:58:12.644529 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:58:12.646246 kubelet[2763]: E1123 22:58:12.646091 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:58:13.646288 kubelet[2763]: E1123 22:58:13.646171 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:58:14.648431 kubelet[2763]: E1123 22:58:14.648271 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:58:16.056267 systemd[1]: Started sshd@9-188.245.196.203:22-139.178.89.65:54376.service - OpenSSH 
per-connection server daemon (139.178.89.65:54376). Nov 23 22:58:17.031763 sshd[5011]: Accepted publickey for core from 139.178.89.65 port 54376 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:17.034663 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:17.043032 systemd-logind[1528]: New session 9 of user core. Nov 23 22:58:17.049981 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 23 22:58:17.788194 sshd[5041]: Connection closed by 139.178.89.65 port 54376 Nov 23 22:58:17.788836 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:17.795562 systemd[1]: sshd@9-188.245.196.203:22-139.178.89.65:54376.service: Deactivated successfully. Nov 23 22:58:17.800815 systemd[1]: session-9.scope: Deactivated successfully. Nov 23 22:58:17.802049 systemd-logind[1528]: Session 9 logged out. Waiting for processes to exit. Nov 23 22:58:17.806082 systemd-logind[1528]: Removed session 9. Nov 23 22:58:18.642785 kubelet[2763]: E1123 22:58:18.642679 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:58:19.646010 kubelet[2763]: E1123 22:58:19.645903 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:58:21.647810 kubelet[2763]: E1123 22:58:21.645656 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:58:22.955624 systemd[1]: Started sshd@10-188.245.196.203:22-139.178.89.65:37756.service - OpenSSH per-connection server daemon (139.178.89.65:37756). Nov 23 22:58:23.938567 sshd[5053]: Accepted publickey for core from 139.178.89.65 port 37756 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:23.940629 sshd-session[5053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:23.949638 systemd-logind[1528]: New session 10 of user core. Nov 23 22:58:23.951958 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 23 22:58:24.643249 kubelet[2763]: E1123 22:58:24.643199 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:58:24.697340 sshd[5060]: Connection closed by 139.178.89.65 port 37756 Nov 23 22:58:24.697246 sshd-session[5053]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:24.704062 systemd[1]: sshd@10-188.245.196.203:22-139.178.89.65:37756.service: Deactivated successfully. Nov 23 22:58:24.707315 systemd[1]: session-10.scope: Deactivated successfully. Nov 23 22:58:24.715478 systemd-logind[1528]: Session 10 logged out. Waiting for processes to exit. Nov 23 22:58:24.716735 systemd-logind[1528]: Removed session 10. Nov 23 22:58:24.867586 systemd[1]: Started sshd@11-188.245.196.203:22-139.178.89.65:37764.service - OpenSSH per-connection server daemon (139.178.89.65:37764). Nov 23 22:58:25.855473 sshd[5074]: Accepted publickey for core from 139.178.89.65 port 37764 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:25.858334 sshd-session[5074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:25.865527 systemd-logind[1528]: New session 11 of user core. Nov 23 22:58:25.873004 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 23 22:58:26.643180 kubelet[2763]: E1123 22:58:26.642985 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:58:26.643996 kubelet[2763]: E1123 22:58:26.643934 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:58:26.684359 sshd[5078]: Connection closed by 
139.178.89.65 port 37764 Nov 23 22:58:26.682945 sshd-session[5074]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:26.690098 systemd[1]: sshd@11-188.245.196.203:22-139.178.89.65:37764.service: Deactivated successfully. Nov 23 22:58:26.693149 systemd[1]: session-11.scope: Deactivated successfully. Nov 23 22:58:26.694951 systemd-logind[1528]: Session 11 logged out. Waiting for processes to exit. Nov 23 22:58:26.697411 systemd-logind[1528]: Removed session 11. Nov 23 22:58:26.851804 systemd[1]: Started sshd@12-188.245.196.203:22-139.178.89.65:37774.service - OpenSSH per-connection server daemon (139.178.89.65:37774). Nov 23 22:58:27.646273 kubelet[2763]: E1123 22:58:27.646209 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:58:27.836615 sshd[5088]: Accepted publickey for core from 139.178.89.65 port 37774 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:27.838758 sshd-session[5088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:27.849895 systemd-logind[1528]: New session 12 of user core. Nov 23 22:58:27.854555 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 23 22:58:28.591106 sshd[5091]: Connection closed by 139.178.89.65 port 37774 Nov 23 22:58:28.592949 sshd-session[5088]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:28.600056 systemd[1]: sshd@12-188.245.196.203:22-139.178.89.65:37774.service: Deactivated successfully. Nov 23 22:58:28.604155 systemd[1]: session-12.scope: Deactivated successfully. Nov 23 22:58:28.608249 systemd-logind[1528]: Session 12 logged out. Waiting for processes to exit. Nov 23 22:58:28.611492 systemd-logind[1528]: Removed session 12. 
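Every "Error syncing pod, skipping" entry above traces back to the same small set of images and pods. A short illustrative tally over an exported journal (assuming journalctl output saved one entry per line to a file named journal.txt, a name not taken from the log) makes that pattern explicit:

# Sketch: tally the repeated failures in an exported journal. Counts
# "Error syncing pod, skipping" entries per pod, and back-off entries per
# image. The file name journal.txt is an assumption, not from the log.
import re
from collections import Counter

IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)')
POD_RE = re.compile(r'pod="([^"]+)"')

def tally(path: str = "journal.txt"):
    images, pods = Counter(), Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "Error syncing pod, skipping" not in line:
                continue
            pods.update(POD_RE.findall(line))
            images.update(IMAGE_RE.findall(line))
    return images, pods

if __name__ == "__main__":
    images, pods = tally()
    for name, count in images.most_common():
        print(f"{count:4d}  image {name}")
    for name, count in pods.most_common():
        print(f"{count:4d}  pod   {name}")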
Nov 23 22:58:32.643779 kubelet[2763]: E1123 22:58:32.642578 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:58:33.642736 kubelet[2763]: E1123 22:58:33.642683 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:58:33.756300 systemd[1]: Started sshd@13-188.245.196.203:22-139.178.89.65:37156.service - OpenSSH per-connection server daemon (139.178.89.65:37156). Nov 23 22:58:34.745826 sshd[5105]: Accepted publickey for core from 139.178.89.65 port 37156 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:34.748169 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:34.755795 systemd-logind[1528]: New session 13 of user core. Nov 23 22:58:34.760986 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 23 22:58:35.521230 sshd[5108]: Connection closed by 139.178.89.65 port 37156 Nov 23 22:58:35.520165 sshd-session[5105]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:35.527347 systemd[1]: sshd@13-188.245.196.203:22-139.178.89.65:37156.service: Deactivated successfully. Nov 23 22:58:35.531708 systemd[1]: session-13.scope: Deactivated successfully. Nov 23 22:58:35.534700 systemd-logind[1528]: Session 13 logged out. Waiting for processes to exit. Nov 23 22:58:35.536651 systemd-logind[1528]: Removed session 13. Nov 23 22:58:35.686288 systemd[1]: Started sshd@14-188.245.196.203:22-139.178.89.65:37162.service - OpenSSH per-connection server daemon (139.178.89.65:37162). 
Nov 23 22:58:36.642766 kubelet[2763]: E1123 22:58:36.642456 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:58:36.642766 kubelet[2763]: E1123 22:58:36.642597 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:58:36.667291 sshd[5120]: Accepted publickey for core from 139.178.89.65 port 37162 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:36.669712 sshd-session[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:36.677459 systemd-logind[1528]: New session 14 of user core. Nov 23 22:58:36.683943 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 23 22:58:37.585290 sshd[5123]: Connection closed by 139.178.89.65 port 37162 Nov 23 22:58:37.586214 sshd-session[5120]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:37.593264 systemd[1]: session-14.scope: Deactivated successfully. Nov 23 22:58:37.594431 systemd[1]: sshd@14-188.245.196.203:22-139.178.89.65:37162.service: Deactivated successfully. Nov 23 22:58:37.603316 systemd-logind[1528]: Session 14 logged out. Waiting for processes to exit. Nov 23 22:58:37.606020 systemd-logind[1528]: Removed session 14. Nov 23 22:58:37.643301 kubelet[2763]: E1123 22:58:37.642774 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:58:37.754519 systemd[1]: Started sshd@15-188.245.196.203:22-139.178.89.65:37174.service - OpenSSH per-connection server daemon (139.178.89.65:37174). Nov 23 22:58:38.729754 sshd[5133]: Accepted publickey for core from 139.178.89.65 port 37174 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:38.731163 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:38.737547 systemd-logind[1528]: New session 15 of user core. Nov 23 22:58:38.747002 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 23 22:58:39.649640 kubelet[2763]: E1123 22:58:39.649489 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:58:40.109385 sshd[5138]: Connection closed by 139.178.89.65 port 37174 Nov 23 22:58:40.112462 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:40.118551 systemd[1]: sshd@15-188.245.196.203:22-139.178.89.65:37174.service: Deactivated successfully. Nov 23 22:58:40.123118 systemd[1]: session-15.scope: Deactivated successfully. Nov 23 22:58:40.125778 systemd-logind[1528]: Session 15 logged out. Waiting for processes to exit. Nov 23 22:58:40.128896 systemd-logind[1528]: Removed session 15. Nov 23 22:58:40.276654 systemd[1]: Started sshd@16-188.245.196.203:22-139.178.89.65:43270.service - OpenSSH per-connection server daemon (139.178.89.65:43270). Nov 23 22:58:40.645620 kubelet[2763]: E1123 22:58:40.644833 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:58:41.263646 sshd[5156]: Accepted publickey for core from 139.178.89.65 port 43270 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:41.266234 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:41.274463 systemd-logind[1528]: New session 16 of user core. Nov 23 22:58:41.280465 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 23 22:58:42.190269 sshd[5159]: Connection closed by 139.178.89.65 port 43270 Nov 23 22:58:42.190992 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:42.196566 systemd[1]: sshd@16-188.245.196.203:22-139.178.89.65:43270.service: Deactivated successfully. 
Nov 23 22:58:42.202379 systemd[1]: session-16.scope: Deactivated successfully. Nov 23 22:58:42.203948 systemd-logind[1528]: Session 16 logged out. Waiting for processes to exit. Nov 23 22:58:42.205543 systemd-logind[1528]: Removed session 16. Nov 23 22:58:42.359029 systemd[1]: Started sshd@17-188.245.196.203:22-139.178.89.65:43274.service - OpenSSH per-connection server daemon (139.178.89.65:43274). Nov 23 22:58:43.325158 sshd[5169]: Accepted publickey for core from 139.178.89.65 port 43274 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:43.327096 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:43.335425 systemd-logind[1528]: New session 17 of user core. Nov 23 22:58:43.340079 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 23 22:58:43.643659 kubelet[2763]: E1123 22:58:43.643494 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:58:44.120042 sshd[5172]: Connection closed by 139.178.89.65 port 43274 Nov 23 22:58:44.120819 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:44.127081 systemd[1]: sshd@17-188.245.196.203:22-139.178.89.65:43274.service: Deactivated successfully. Nov 23 22:58:44.131112 systemd[1]: session-17.scope: Deactivated successfully. Nov 23 22:58:44.135666 systemd-logind[1528]: Session 17 logged out. Waiting for processes to exit. Nov 23 22:58:44.137552 systemd-logind[1528]: Removed session 17. Nov 23 22:58:46.643754 kubelet[2763]: E1123 22:58:46.641914 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:58:48.644749 kubelet[2763]: E1123 22:58:48.642662 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:58:49.293040 systemd[1]: Started sshd@18-188.245.196.203:22-139.178.89.65:43284.service - OpenSSH per-connection server daemon (139.178.89.65:43284). 
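Interleaved with the pull failures, sshd and systemd-logind record a steady series of short sessions for user core. Under the same one-entry-per-line export assumption as above, an illustrative pairing of each "Accepted publickey ... port N" with the matching "Connection closed by ... port N" reports how long each session lasted:

# Sketch: pair sshd accept/close lines from a one-entry-per-line journal
# export and print how long each session lasted. File name, timestamp
# handling and output format are illustrative.
import re
from datetime import datetime

ACCEPT_RE = re.compile(r"^(\w+ \d+ [\d:.]+) .*Accepted publickey for (\S+) from (\S+) port (\d+)")
CLOSE_RE = re.compile(r"^(\w+ \d+ [\d:.]+) .*Connection closed by (\S+) port (\d+)")

def parse_ts(stamp: str, year: int = 2025) -> datetime:
    # Journal short timestamps omit the year, so one is supplied here.
    fmt = "%Y %b %d %H:%M:%S.%f" if "." in stamp else "%Y %b %d %H:%M:%S"
    return datetime.strptime(f"{year} {stamp}", fmt)

def sessions(path: str = "journal.txt"):
    open_at = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if m := ACCEPT_RE.match(line):
                stamp, user, addr, port = m.groups()
                open_at[(addr, port)] = (user, parse_ts(stamp))
            elif m := CLOSE_RE.match(line):
                stamp, addr, port = m.groups()
                if (addr, port) in open_at:
                    user, started = open_at.pop((addr, port))
                    yield user, addr, port, parse_ts(stamp) - started

if __name__ == "__main__":
    for user, addr, port, duration in sessions():
        print(f"{user}@{addr}:{port} lasted {duration}")

Connections closed before authentication, such as the invalid-user attempt from 115.231.78.11 above, never match an accept line and are simply skipped.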
Nov 23 22:58:49.644498 kubelet[2763]: E1123 22:58:49.642512 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:58:50.272154 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 43284 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:50.275766 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:50.284640 systemd-logind[1528]: New session 18 of user core. Nov 23 22:58:50.291953 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 23 22:58:50.644757 kubelet[2763]: E1123 22:58:50.643122 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:58:51.036012 sshd[5215]: Connection closed by 139.178.89.65 port 43284 Nov 23 22:58:51.036422 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:51.042759 systemd-logind[1528]: Session 18 logged out. Waiting for processes to exit. Nov 23 22:58:51.043105 systemd[1]: sshd@18-188.245.196.203:22-139.178.89.65:43284.service: Deactivated successfully. Nov 23 22:58:51.050255 systemd[1]: session-18.scope: Deactivated successfully. Nov 23 22:58:51.054478 systemd-logind[1528]: Removed session 18. 
Nov 23 22:58:51.649272 kubelet[2763]: E1123 22:58:51.649218 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:58:52.644062 kubelet[2763]: E1123 22:58:52.643923 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:58:56.213115 systemd[1]: Started sshd@19-188.245.196.203:22-139.178.89.65:58514.service - OpenSSH per-connection server daemon (139.178.89.65:58514). Nov 23 22:58:57.211402 sshd[5227]: Accepted publickey for core from 139.178.89.65 port 58514 ssh2: RSA SHA256:YIuyzm9dpKOhrVMPbKDgYZEDQEc4SEwyWuFw37ATQJ8 Nov 23 22:58:57.213334 sshd-session[5227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 23 22:58:57.220544 systemd-logind[1528]: New session 19 of user core. Nov 23 22:58:57.227010 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 23 22:58:58.000300 sshd[5230]: Connection closed by 139.178.89.65 port 58514 Nov 23 22:58:57.999516 sshd-session[5227]: pam_unix(sshd:session): session closed for user core Nov 23 22:58:58.004810 systemd[1]: sshd@19-188.245.196.203:22-139.178.89.65:58514.service: Deactivated successfully. Nov 23 22:58:58.009919 systemd[1]: session-19.scope: Deactivated successfully. Nov 23 22:58:58.014547 systemd-logind[1528]: Session 19 logged out. Waiting for processes to exit. Nov 23 22:58:58.017056 systemd-logind[1528]: Removed session 19. 
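One way to see at a glance which references never resolve is to extract the image names from the repeated "failed to resolve reference" fragments in entries like the ones above and deduplicate them. A small sketch written against the backslash-escaped quoting visible in these journal lines, reading the log text from stdin:

```python
# Sketch: dedupe the failing image references from journal lines like the
# ones above. The regex targets the escaped quoting visible in these
# entries ('failed to resolve reference \"<image>\"').
import re
import sys

REF = re.compile(r'failed to resolve reference \\+"([^"\\]+)')

def failing_images(lines):
    refs = set()
    for line in lines:
        refs.update(m.group(1) for m in REF.finditer(line))
    return sorted(refs)

if __name__ == "__main__":
    for image in failing_images(sys.stdin):
        print(image)  # e.g. ghcr.io/flatcar/calico/apiserver:v3.30.4
```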
Nov 23 22:58:58.643771 kubelet[2763]: E1123 22:58:58.642678 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:58:59.648771 kubelet[2763]: E1123 22:58:59.648687 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:59:01.650049 kubelet[2763]: E1123 22:59:01.649963 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:59:02.642534 kubelet[2763]: E1123 22:59:02.642428 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:59:02.643805 kubelet[2763]: E1123 22:59:02.643625 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:59:02.643805 kubelet[2763]: E1123 22:59:02.643711 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:59:03.645662 kubelet[2763]: E1123 22:59:03.645220 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:59:09.643764 containerd[1546]: time="2025-11-23T22:59:09.643475246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 23 22:59:09.994006 containerd[1546]: time="2025-11-23T22:59:09.993703324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:09.996694 containerd[1546]: time="2025-11-23T22:59:09.995704048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 23 22:59:09.996694 containerd[1546]: time="2025-11-23T22:59:09.995837248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:09.996880 kubelet[2763]: E1123 22:59:09.996083 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:09.996880 kubelet[2763]: E1123 22:59:09.996180 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 23 22:59:09.996880 kubelet[2763]: E1123 22:59:09.996374 2763 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6vqf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-64b6d4565b-wllpg_calico-apiserver(a5c13c52-4438-4f33-920f-ea52cca520b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:09.997671 kubelet[2763]: E1123 22:59:09.997611 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64b6d4565b-wllpg" podUID="a5c13c52-4438-4f33-920f-ea52cca520b8" Nov 23 22:59:11.644436 kubelet[2763]: E1123 22:59:11.644362 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-78cb8dfc4-sks64" podUID="d80e7c98-e2c6-4469-b5eb-05d06ffc6880" Nov 23 22:59:13.562736 kubelet[2763]: E1123 22:59:13.562635 2763 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:42118->10.0.0.2:2379: read: connection timed out" Nov 23 22:59:13.572591 systemd[1]: cri-containerd-501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658.scope: Deactivated successfully. Nov 23 22:59:13.572989 systemd[1]: cri-containerd-501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658.scope: Consumed 3.169s CPU time, 27M memory peak, 3.2M read from disk. Nov 23 22:59:13.580093 containerd[1546]: time="2025-11-23T22:59:13.580048121Z" level=info msg="received container exit event container_id:\"501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658\" id:\"501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658\" pid:2619 exit_status:1 exited_at:{seconds:1763938753 nanos:578633398}" Nov 23 22:59:13.610598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658-rootfs.mount: Deactivated successfully. Nov 23 22:59:14.516222 kubelet[2763]: I1123 22:59:14.515463 2763 scope.go:117] "RemoveContainer" containerID="501ee4f69cac1d1029442340673b009a65da08942cef0b8e105cae084992f658" Nov 23 22:59:14.521485 containerd[1546]: time="2025-11-23T22:59:14.520773952Z" level=info msg="CreateContainer within sandbox \"6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 23 22:59:14.556131 containerd[1546]: time="2025-11-23T22:59:14.556078660Z" level=info msg="Container 6556a01a4b7a6570a7988a5717c32c9b34de3aa4bc6861e2259184d349e4a980: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:14.564094 systemd[1]: cri-containerd-8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7.scope: Deactivated successfully. Nov 23 22:59:14.564605 systemd[1]: cri-containerd-8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7.scope: Consumed 41.061s CPU time, 109.1M memory peak. Nov 23 22:59:14.569178 containerd[1546]: time="2025-11-23T22:59:14.569100645Z" level=info msg="received container exit event container_id:\"8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7\" id:\"8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7\" pid:3079 exit_status:1 exited_at:{seconds:1763938754 nanos:568615084}" Nov 23 22:59:14.570075 containerd[1546]: time="2025-11-23T22:59:14.569957247Z" level=info msg="CreateContainer within sandbox \"6fc8c9f0e5659642685546f150fb6f374c4eff3050ff9c17309df0aaf81b7a5a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"6556a01a4b7a6570a7988a5717c32c9b34de3aa4bc6861e2259184d349e4a980\"" Nov 23 22:59:14.570779 containerd[1546]: time="2025-11-23T22:59:14.570750728Z" level=info msg="StartContainer for \"6556a01a4b7a6570a7988a5717c32c9b34de3aa4bc6861e2259184d349e4a980\"" Nov 23 22:59:14.573160 containerd[1546]: time="2025-11-23T22:59:14.572937333Z" level=info msg="connecting to shim 6556a01a4b7a6570a7988a5717c32c9b34de3aa4bc6861e2259184d349e4a980" address="unix:///run/containerd/s/008376b6c1335a935f3ce29549add5d72af74886aa1e435b7ea88a47ed4dc656" protocol=ttrpc version=3 Nov 23 22:59:14.598718 systemd[1]: cri-containerd-2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df.scope: Deactivated successfully. 
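The lease-update failure above is a client-side read timeout against the etcd endpoint (10.0.0.2:2379, with 10.0.0.3 as the local node address), and the control-plane containers exiting right afterwards are consistent with that. A minimal reachability/latency probe against that endpoint, assuming it is routable from wherever this runs; a real etcd health check would additionally need the cluster's client TLS certificates, which this sketch does not attempt:

```python
# Sketch: plain TCP connect/latency probe of the etcd client endpoint from
# the lease error above (10.0.0.2:2379). This only shows whether the port
# answers; it is not an etcd health check (that needs client TLS certs).
import socket
import time

def tcp_probe(host: str, port: int, timeout: float = 3.0) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

if __name__ == "__main__":
    try:
        print(f"connected in {tcp_probe('10.0.0.2', 2379):.3f}s")
    except OSError as err:
        print(f"endpoint unreachable: {err}")
```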
Nov 23 22:59:14.599643 systemd[1]: cri-containerd-2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df.scope: Consumed 4.803s CPU time, 65.1M memory peak, 3M read from disk. Nov 23 22:59:14.609970 systemd[1]: Started cri-containerd-6556a01a4b7a6570a7988a5717c32c9b34de3aa4bc6861e2259184d349e4a980.scope - libcontainer container 6556a01a4b7a6570a7988a5717c32c9b34de3aa4bc6861e2259184d349e4a980. Nov 23 22:59:14.611013 containerd[1546]: time="2025-11-23T22:59:14.610114524Z" level=info msg="received container exit event container_id:\"2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df\" id:\"2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df\" pid:2611 exit_status:1 exited_at:{seconds:1763938754 nanos:609069562}" Nov 23 22:59:14.632179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7-rootfs.mount: Deactivated successfully. Nov 23 22:59:14.644017 kubelet[2763]: E1123 22:59:14.643976 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-78cb8dfc4-tz5zf" podUID="adda981c-9ce7-4e01-b56b-dc8bfccf049e" Nov 23 22:59:14.645189 containerd[1546]: time="2025-11-23T22:59:14.645154352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 23 22:59:14.680653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df-rootfs.mount: Deactivated successfully. 
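Three long-running containers (kube-scheduler, tigera-operator, kube-controller-manager) report exit_status:1 within roughly a second of each other above, before the kubelet recreates them at Attempt:1. The containerd "received container exit event" lines carry enough structure to tabulate that; a parsing sketch against the exact key:value format visible in those entries (container_id, pid, exit_status, exited_at.seconds), again reading the journal text from stdin:

```python
# Sketch: tabulate containerd exit events from journal text like the lines
# above, using the container_id / pid / exit_status / exited_at fields in
# the 'received container exit event' messages.
import re
import sys
from datetime import datetime, timezone

EVENT = re.compile(
    r'received container exit event container_id:\\?"(?P<cid>[0-9a-f]+)\\?"'
    r'.*?pid:(?P<pid>\d+) exit_status:(?P<status>\d+)'
    r' exited_at:\{seconds:(?P<secs>\d+)'
)

def exit_events(text: str):
    for m in EVENT.finditer(text):
        when = datetime.fromtimestamp(int(m["secs"]), tz=timezone.utc)
        yield m["cid"][:12], int(m["pid"]), int(m["status"]), when

if __name__ == "__main__":
    for cid, pid, status, when in exit_events(sys.stdin.read()):
        print(f"{when.isoformat()}  {cid}  pid={pid}  exit_status={status}")
```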
Nov 23 22:59:14.698664 containerd[1546]: time="2025-11-23T22:59:14.698612056Z" level=info msg="StartContainer for \"6556a01a4b7a6570a7988a5717c32c9b34de3aa4bc6861e2259184d349e4a980\" returns successfully" Nov 23 22:59:14.993863 containerd[1546]: time="2025-11-23T22:59:14.993593747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:14.995183 containerd[1546]: time="2025-11-23T22:59:14.995123190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 23 22:59:14.995480 containerd[1546]: time="2025-11-23T22:59:14.995356630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:14.995853 kubelet[2763]: E1123 22:59:14.995743 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:14.995853 kubelet[2763]: E1123 22:59:14.995811 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 23 22:59:14.996098 kubelet[2763]: E1123 22:59:14.995948 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dn6zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c79f46457-wvsqw_calico-system(5efde8bf-2f30-47b7-ac7d-0827fb837ab3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:14.997194 kubelet[2763]: E1123 22:59:14.997128 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c79f46457-wvsqw" podUID="5efde8bf-2f30-47b7-ac7d-0827fb837ab3" Nov 23 22:59:15.519243 kubelet[2763]: I1123 22:59:15.518898 2763 scope.go:117] "RemoveContainer" containerID="8210de68b03aae1191752e75b61372126a791424d8ec1debe003ae5e3014aef7" Nov 23 22:59:15.522074 containerd[1546]: time="2025-11-23T22:59:15.521963079Z" level=info msg="CreateContainer within sandbox \"818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 23 22:59:15.529819 kubelet[2763]: I1123 22:59:15.529658 2763 scope.go:117] "RemoveContainer" containerID="2b042de060d6052c616100b9c2249a44763f4ecf915498aae354655c6ce2b0df" Nov 23 22:59:15.533552 containerd[1546]: time="2025-11-23T22:59:15.533469501Z" level=info msg="CreateContainer within sandbox \"c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 23 22:59:15.536682 containerd[1546]: time="2025-11-23T22:59:15.536368586Z" level=info msg="Container eb97651c739cea7760c4799b7620595943185775756a54a8a66a6b048cb19040: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:15.547745 containerd[1546]: time="2025-11-23T22:59:15.546214245Z" level=info msg="Container 16dfaaff43205ed1d1e451d5e92da6ca0447d3e11561aebe128fe7747fe5e382: CDI devices from CRI Config.CDIDevices: []" Nov 23 22:59:15.555198 containerd[1546]: time="2025-11-23T22:59:15.555083142Z" level=info msg="CreateContainer within sandbox 
\"818f0f2a106bf86c5cb7e36f91e7ede3b00b588e45fac4c8213df4a96faee4ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"eb97651c739cea7760c4799b7620595943185775756a54a8a66a6b048cb19040\"" Nov 23 22:59:15.556770 containerd[1546]: time="2025-11-23T22:59:15.556142384Z" level=info msg="StartContainer for \"eb97651c739cea7760c4799b7620595943185775756a54a8a66a6b048cb19040\"" Nov 23 22:59:15.557611 containerd[1546]: time="2025-11-23T22:59:15.557567027Z" level=info msg="connecting to shim eb97651c739cea7760c4799b7620595943185775756a54a8a66a6b048cb19040" address="unix:///run/containerd/s/7cec225c53bff478a7084912a4436140e05ce4396454f292afd43c8989891d95" protocol=ttrpc version=3 Nov 23 22:59:15.563860 containerd[1546]: time="2025-11-23T22:59:15.563816519Z" level=info msg="CreateContainer within sandbox \"c7b09296958a7c92384fc95455ffe4af8a131394845753d859693ffe9a01e8e3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"16dfaaff43205ed1d1e451d5e92da6ca0447d3e11561aebe128fe7747fe5e382\"" Nov 23 22:59:15.566583 containerd[1546]: time="2025-11-23T22:59:15.566443604Z" level=info msg="StartContainer for \"16dfaaff43205ed1d1e451d5e92da6ca0447d3e11561aebe128fe7747fe5e382\"" Nov 23 22:59:15.571247 containerd[1546]: time="2025-11-23T22:59:15.571190853Z" level=info msg="connecting to shim 16dfaaff43205ed1d1e451d5e92da6ca0447d3e11561aebe128fe7747fe5e382" address="unix:///run/containerd/s/88591a1bc5b5a0e2dfe546f4fed0529ed42990add48627894fd508f58c638816" protocol=ttrpc version=3 Nov 23 22:59:15.597919 systemd[1]: Started cri-containerd-eb97651c739cea7760c4799b7620595943185775756a54a8a66a6b048cb19040.scope - libcontainer container eb97651c739cea7760c4799b7620595943185775756a54a8a66a6b048cb19040. Nov 23 22:59:15.608910 systemd[1]: Started cri-containerd-16dfaaff43205ed1d1e451d5e92da6ca0447d3e11561aebe128fe7747fe5e382.scope - libcontainer container 16dfaaff43205ed1d1e451d5e92da6ca0447d3e11561aebe128fe7747fe5e382. 
Nov 23 22:59:15.646119 containerd[1546]: time="2025-11-23T22:59:15.646084516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 23 22:59:15.676193 containerd[1546]: time="2025-11-23T22:59:15.676065694Z" level=info msg="StartContainer for \"16dfaaff43205ed1d1e451d5e92da6ca0447d3e11561aebe128fe7747fe5e382\" returns successfully" Nov 23 22:59:15.715919 containerd[1546]: time="2025-11-23T22:59:15.715645850Z" level=info msg="StartContainer for \"eb97651c739cea7760c4799b7620595943185775756a54a8a66a6b048cb19040\" returns successfully" Nov 23 22:59:15.981016 containerd[1546]: time="2025-11-23T22:59:15.980940237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:15.982987 containerd[1546]: time="2025-11-23T22:59:15.982857321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 23 22:59:15.982987 containerd[1546]: time="2025-11-23T22:59:15.982958881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 23 22:59:15.983839 kubelet[2763]: E1123 22:59:15.983784 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:15.984414 kubelet[2763]: E1123 22:59:15.984181 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 23 22:59:15.984414 kubelet[2763]: E1123 22:59:15.984355 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:0699b6371c9d444dac6521e58a9fef96,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:15.987784 containerd[1546]: time="2025-11-23T22:59:15.987489130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 23 22:59:16.323210 containerd[1546]: time="2025-11-23T22:59:16.322534485Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:16.324153 containerd[1546]: time="2025-11-23T22:59:16.323999767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 23 22:59:16.324153 containerd[1546]: time="2025-11-23T22:59:16.324042847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 23 22:59:16.324557 kubelet[2763]: E1123 22:59:16.324470 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:16.324698 kubelet[2763]: E1123 22:59:16.324672 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 23 22:59:16.325075 kubelet[2763]: E1123 22:59:16.324999 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qvfwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6db66bb6fb-5fmxw_calico-system(71fd8f09-1ec1-4a2b-a495-70eb0d66adad): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:16.326237 kubelet[2763]: E1123 22:59:16.326190 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6db66bb6fb-5fmxw" podUID="71fd8f09-1ec1-4a2b-a495-70eb0d66adad" Nov 23 22:59:16.645588 kubelet[2763]: E1123 22:59:16.645294 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zjft2" podUID="8226f51c-b67c-40ab-9e53-94d216a79ce7" Nov 23 22:59:17.645378 containerd[1546]: time="2025-11-23T22:59:17.645029015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 23 22:59:18.004050 containerd[1546]: time="2025-11-23T22:59:18.003862087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 23 22:59:18.005400 containerd[1546]: time="2025-11-23T22:59:18.004993609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 23 22:59:18.005632 containerd[1546]: time="2025-11-23T22:59:18.005179730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 23 22:59:18.006736 kubelet[2763]: E1123 22:59:18.005850 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:18.007326 kubelet[2763]: E1123 22:59:18.007091 2763 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 23 22:59:18.007326 kubelet[2763]: E1123 22:59:18.007247 2763 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h69hh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-ckjtj_calico-system(91280d56-7002-4fda-b0e5-b372b6025512): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 23 22:59:18.008498 kubelet[2763]: E1123 22:59:18.008460 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-ckjtj" podUID="91280d56-7002-4fda-b0e5-b372b6025512" Nov 23 22:59:18.229796 kubelet[2763]: E1123 
22:59:18.229629 2763 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41968->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4459-1-2-5-0c65a92823.187ac4f20175acc5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4459-1-2-5-0c65a92823,UID:1d5156d6dac89e68660ee4679c4d3dfe,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4459-1-2-5-0c65a92823,},FirstTimestamp:2025-11-23 22:59:07.754589381 +0000 UTC m=+216.240707692,LastTimestamp:2025-11-23 22:59:07.754589381 +0000 UTC m=+216.240707692,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-2-5-0c65a92823,}"
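The final rejected event records the underlying symptom: the kube-apiserver readiness probe on this node returned HTTP 500 while etcd reads were timing out. A sketch for reading /readyz?verbose directly, assuming the apiserver serves on port 6443 at the node address seen in the lease error (10.0.0.3) and that anonymous access to /readyz is permitted (the default system:public-info-viewer binding); none of these details appear in this log, and certificate verification is skipped only because the cluster CA is not available here:

```python
# Sketch: fetch the apiserver /readyz?verbose output that the probe in the
# event above saw fail with 500. Host (10.0.0.3, from the lease error),
# port 6443, and anonymous access to /readyz are all assumptions.
import ssl
import urllib.error
import urllib.request

def readyz(host: str, port: int = 6443) -> str:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # cluster CA not at hand in this sketch
    url = f"https://{host}:{port}/readyz?verbose"
    try:
        with urllib.request.urlopen(url, context=ctx, timeout=5) as resp:
            return f"HTTP {resp.status}\n{resp.read().decode()}"
    except urllib.error.HTTPError as err:  # e.g. the 500 reported above
        return f"HTTP {err.code}\n{err.read().decode()}"

if __name__ == "__main__":
    print(readyz("10.0.0.3"))
```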