Nov 6 00:22:03.956120 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 22:12:28 -00 2025 Nov 6 00:22:03.956148 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:22:03.956164 kernel: BIOS-provided physical RAM map: Nov 6 00:22:03.956175 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 00:22:03.956184 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 00:22:03.956194 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 00:22:03.956206 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable Nov 6 00:22:03.956217 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved Nov 6 00:22:03.956226 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 6 00:22:03.956238 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 6 00:22:03.956248 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 00:22:03.956258 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 00:22:03.956268 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Nov 6 00:22:03.956278 kernel: NX (Execute Disable) protection: active Nov 6 00:22:03.956291 kernel: APIC: Static calls initialized Nov 6 00:22:03.956303 kernel: SMBIOS 3.0.0 present. 
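The command line logged above is what ties the Flatcar /usr partition to a dm-verity mapping (mount.usr=/dev/mapper/usr, verity.usr=PARTUUID=..., verity.usrhash=...) and records the OEM target for later boot stages (flatcar.oem.id=hetzner). A minimal sketch, assuming the same string is readable from /proc/cmdline on the running system; the script name and the keys printed are illustrative only:

    # parse_cmdline.py - split /proc/cmdline into key/value pairs (illustrative sketch)
    def parse_cmdline(text: str) -> dict:
        params = {}
        for token in text.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True  # bare flags become True
        return params  # note: repeated keys (console=, rootflags=) keep the last value

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            params = parse_cmdline(f.read())
        for key in ("root", "mount.usr", "verity.usrhash", "flatcar.oem.id"):
            print(f"{key} = {params.get(key)}")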
Nov 6 00:22:03.956315 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017 Nov 6 00:22:03.956326 kernel: DMI: Memory slots populated: 1/1 Nov 6 00:22:03.956337 kernel: Hypervisor detected: KVM Nov 6 00:22:03.956348 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Nov 6 00:22:03.956359 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 00:22:03.956369 kernel: kvm-clock: using sched offset of 4657128950 cycles Nov 6 00:22:03.956381 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 00:22:03.956393 kernel: tsc: Detected 2495.310 MHz processor Nov 6 00:22:03.956404 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 00:22:03.956418 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 00:22:03.956429 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000 Nov 6 00:22:03.956441 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 00:22:03.956457 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 00:22:03.956478 kernel: Using GB pages for direct mapping Nov 6 00:22:03.956500 kernel: ACPI: Early table checksum verification disabled Nov 6 00:22:03.956515 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS ) Nov 6 00:22:03.956531 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:22:03.956547 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:22:03.956567 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:22:03.956582 kernel: ACPI: FACS 0x000000007CFE0000 000040 Nov 6 00:22:03.956635 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:22:03.956649 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:22:03.956660 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:22:03.956672 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 00:22:03.956688 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576] Nov 6 00:22:03.956701 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482] Nov 6 00:22:03.956713 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f] Nov 6 00:22:03.956724 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6] Nov 6 00:22:03.956736 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e] Nov 6 00:22:03.956747 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a] Nov 6 00:22:03.956759 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692] Nov 6 00:22:03.956770 kernel: No NUMA configuration found Nov 6 00:22:03.956784 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff] Nov 6 00:22:03.956795 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff] Nov 6 00:22:03.956820 kernel: Zone ranges: Nov 6 00:22:03.956832 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 00:22:03.956843 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff] Nov 6 00:22:03.956855 kernel: Normal empty Nov 6 00:22:03.956866 kernel: Device empty Nov 6 00:22:03.956877 kernel: Movable zone start for each node Nov 6 00:22:03.956888 kernel: Early memory node ranges Nov 6 00:22:03.956900 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 00:22:03.956913 kernel: 
node 0: [mem 0x0000000000100000-0x000000007cfdbfff] Nov 6 00:22:03.956925 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff] Nov 6 00:22:03.956936 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 00:22:03.956947 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 00:22:03.956959 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Nov 6 00:22:03.956970 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 00:22:03.956982 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 00:22:03.956993 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 00:22:03.957004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 00:22:03.957018 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 00:22:03.957029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 00:22:03.957040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 00:22:03.957052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 00:22:03.957063 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 00:22:03.957074 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 6 00:22:03.957086 kernel: CPU topo: Max. logical packages: 1 Nov 6 00:22:03.957097 kernel: CPU topo: Max. logical dies: 1 Nov 6 00:22:03.957108 kernel: CPU topo: Max. dies per package: 1 Nov 6 00:22:03.957121 kernel: CPU topo: Max. threads per core: 1 Nov 6 00:22:03.957133 kernel: CPU topo: Num. cores per package: 2 Nov 6 00:22:03.957144 kernel: CPU topo: Num. threads per package: 2 Nov 6 00:22:03.957155 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Nov 6 00:22:03.957166 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 00:22:03.957178 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 6 00:22:03.957190 kernel: Booting paravirtualized kernel on KVM Nov 6 00:22:03.957201 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 00:22:03.957213 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 6 00:22:03.957227 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Nov 6 00:22:03.957238 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Nov 6 00:22:03.957250 kernel: pcpu-alloc: [0] 0 1 Nov 6 00:22:03.957261 kernel: kvm-guest: PV spinlocks disabled, no host support Nov 6 00:22:03.957274 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:22:03.957286 kernel: random: crng init done Nov 6 00:22:03.957298 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 00:22:03.957310 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 00:22:03.957323 kernel: Fallback order for Node 0: 0 Nov 6 00:22:03.957334 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 511866 Nov 6 00:22:03.957345 kernel: Policy zone: DMA32 Nov 6 00:22:03.957357 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 00:22:03.957368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 6 00:22:03.957380 kernel: ftrace: allocating 40021 entries in 157 pages Nov 6 00:22:03.957391 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 00:22:03.957403 kernel: Dynamic Preempt: voluntary Nov 6 00:22:03.957414 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 00:22:03.957426 kernel: rcu: RCU event tracing is enabled. Nov 6 00:22:03.957440 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 6 00:22:03.957452 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 00:22:03.957463 kernel: Rude variant of Tasks RCU enabled. Nov 6 00:22:03.957475 kernel: Tracing variant of Tasks RCU enabled. Nov 6 00:22:03.957486 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 00:22:03.957498 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 6 00:22:03.957510 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:22:03.957521 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:22:03.957533 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 6 00:22:03.957547 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 6 00:22:03.957558 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 00:22:03.957569 kernel: Console: colour VGA+ 80x25 Nov 6 00:22:03.957580 kernel: printk: legacy console [tty0] enabled Nov 6 00:22:03.957592 kernel: printk: legacy console [ttyS0] enabled Nov 6 00:22:03.957629 kernel: ACPI: Core revision 20240827 Nov 6 00:22:03.957648 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 6 00:22:03.957662 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 00:22:03.957674 kernel: x2apic enabled Nov 6 00:22:03.957686 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 00:22:03.957698 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 6 00:22:03.957710 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f7eb13dd7, max_idle_ns: 440795202126 ns Nov 6 00:22:03.957724 kernel: Calibrating delay loop (skipped) preset value.. 
4990.62 BogoMIPS (lpj=2495310) Nov 6 00:22:03.957737 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 00:22:03.957749 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 6 00:22:03.957760 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 6 00:22:03.957774 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 00:22:03.957786 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 00:22:03.957798 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 00:22:03.957822 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Nov 6 00:22:03.957833 kernel: active return thunk: retbleed_return_thunk Nov 6 00:22:03.957845 kernel: RETBleed: Mitigation: untrained return thunk Nov 6 00:22:03.957858 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 00:22:03.957870 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 00:22:03.957882 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 00:22:03.957896 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 00:22:03.957908 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 00:22:03.957919 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 00:22:03.957932 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Nov 6 00:22:03.957944 kernel: Freeing SMP alternatives memory: 32K Nov 6 00:22:03.957955 kernel: pid_max: default: 32768 minimum: 301 Nov 6 00:22:03.957967 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 00:22:03.957979 kernel: landlock: Up and running. Nov 6 00:22:03.957991 kernel: SELinux: Initializing. Nov 6 00:22:03.958005 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 00:22:03.958017 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 00:22:03.958029 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0) Nov 6 00:22:03.958041 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 6 00:22:03.958053 kernel: ... version: 0 Nov 6 00:22:03.958064 kernel: ... bit width: 48 Nov 6 00:22:03.958076 kernel: ... generic registers: 6 Nov 6 00:22:03.958088 kernel: ... value mask: 0000ffffffffffff Nov 6 00:22:03.958100 kernel: ... max period: 00007fffffffffff Nov 6 00:22:03.958113 kernel: ... fixed-purpose events: 0 Nov 6 00:22:03.958125 kernel: ... event mask: 000000000000003f Nov 6 00:22:03.958137 kernel: signal: max sigframe size: 1776 Nov 6 00:22:03.958149 kernel: rcu: Hierarchical SRCU implementation. Nov 6 00:22:03.958161 kernel: rcu: Max phase no-delay instances is 400. Nov 6 00:22:03.958173 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 6 00:22:03.958185 kernel: smp: Bringing up secondary CPUs ... Nov 6 00:22:03.958197 kernel: smpboot: x86: Booting SMP configuration: Nov 6 00:22:03.958209 kernel: .... 
node #0, CPUs: #1 Nov 6 00:22:03.958222 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 00:22:03.958234 kernel: smpboot: Total of 2 processors activated (9981.24 BogoMIPS) Nov 6 00:22:03.958247 kernel: Memory: 1911636K/2047464K available (14336K kernel code, 2436K rwdata, 26048K rodata, 45548K init, 1180K bss, 131284K reserved, 0K cma-reserved) Nov 6 00:22:03.958259 kernel: devtmpfs: initialized Nov 6 00:22:03.958271 kernel: x86/mm: Memory block size: 128MB Nov 6 00:22:03.958283 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 00:22:03.958296 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 6 00:22:03.958308 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 00:22:03.958320 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 00:22:03.958333 kernel: audit: initializing netlink subsys (disabled) Nov 6 00:22:03.958345 kernel: audit: type=2000 audit(1762388520.583:1): state=initialized audit_enabled=0 res=1 Nov 6 00:22:03.958357 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 00:22:03.958369 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 00:22:03.958381 kernel: cpuidle: using governor menu Nov 6 00:22:03.958393 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 00:22:03.958404 kernel: dca service started, version 1.12.1 Nov 6 00:22:03.958417 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 6 00:22:03.958428 kernel: PCI: Using configuration type 1 for base access Nov 6 00:22:03.958442 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 6 00:22:03.958454 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 00:22:03.958466 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 00:22:03.958478 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 00:22:03.958490 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 00:22:03.958502 kernel: ACPI: Added _OSI(Module Device) Nov 6 00:22:03.958514 kernel: ACPI: Added _OSI(Processor Device) Nov 6 00:22:03.958526 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 00:22:03.958537 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 00:22:03.958551 kernel: ACPI: Interpreter enabled Nov 6 00:22:03.958563 kernel: ACPI: PM: (supports S0 S5) Nov 6 00:22:03.958575 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 00:22:03.958587 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 00:22:03.958613 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 00:22:03.958625 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 00:22:03.958638 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 00:22:03.958826 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 00:22:03.958952 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 6 00:22:03.959063 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 6 00:22:03.959079 kernel: PCI host bridge to bus 0000:00 Nov 6 00:22:03.959190 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 00:22:03.959298 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 00:22:03.959398 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 
6 00:22:03.959497 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window] Nov 6 00:22:03.959625 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 6 00:22:03.959731 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Nov 6 00:22:03.959845 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 00:22:03.959980 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 6 00:22:03.960112 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint Nov 6 00:22:03.960230 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref] Nov 6 00:22:03.960350 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref] Nov 6 00:22:03.960464 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff] Nov 6 00:22:03.960577 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref] Nov 6 00:22:03.960719 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 00:22:03.960857 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.960973 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff] Nov 6 00:22:03.961088 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 6 00:22:03.961205 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Nov 6 00:22:03.961317 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 6 00:22:03.961443 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.961557 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff] Nov 6 00:22:03.961703 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 6 00:22:03.961830 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Nov 6 00:22:03.961945 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 00:22:03.962074 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.962190 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff] Nov 6 00:22:03.962303 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 6 00:22:03.962416 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Nov 6 00:22:03.962529 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 00:22:03.962675 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.962791 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff] Nov 6 00:22:03.962925 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 6 00:22:03.963038 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Nov 6 00:22:03.963150 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 00:22:03.963270 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.963383 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff] Nov 6 00:22:03.963495 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 6 00:22:03.963639 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Nov 6 00:22:03.963760 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 00:22:03.963895 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.964010 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff] Nov 6 00:22:03.964121 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 6 00:22:03.964233 kernel: pci 
0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Nov 6 00:22:03.964348 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 00:22:03.964472 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.964591 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff] Nov 6 00:22:03.964730 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 6 00:22:03.964857 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Nov 6 00:22:03.964970 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 00:22:03.965092 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.965206 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff] Nov 6 00:22:03.965320 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 6 00:22:03.965438 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Nov 6 00:22:03.965551 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 00:22:03.965694 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 00:22:03.965825 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff] Nov 6 00:22:03.965940 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 6 00:22:03.966052 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 6 00:22:03.966165 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 00:22:03.966291 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 6 00:22:03.966407 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 00:22:03.966527 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 6 00:22:03.966663 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f] Nov 6 00:22:03.966777 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff] Nov 6 00:22:03.966916 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 6 00:22:03.967035 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 6 00:22:03.967161 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 6 00:22:03.967282 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff] Nov 6 00:22:03.967399 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Nov 6 00:22:03.967517 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref] Nov 6 00:22:03.967702 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 6 00:22:03.967859 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Nov 6 00:22:03.967985 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit] Nov 6 00:22:03.968099 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 6 00:22:03.968226 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Nov 6 00:22:03.968348 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff] Nov 6 00:22:03.968467 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref] Nov 6 00:22:03.968581 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 6 00:22:03.968735 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Nov 6 00:22:03.968874 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Nov 6 00:22:03.968990 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 6 00:22:03.969116 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Nov 6 00:22:03.969235 
kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff] Nov 6 00:22:03.969352 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref] Nov 6 00:22:03.969465 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 6 00:22:03.969616 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Nov 6 00:22:03.969743 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff] Nov 6 00:22:03.969874 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref] Nov 6 00:22:03.969990 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Nov 6 00:22:03.970007 kernel: acpiphp: Slot [0] registered Nov 6 00:22:03.970130 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Nov 6 00:22:03.970249 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff] Nov 6 00:22:03.970371 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref] Nov 6 00:22:03.970490 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref] Nov 6 00:22:03.970628 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 6 00:22:03.970647 kernel: acpiphp: Slot [0-2] registered Nov 6 00:22:03.970762 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 6 00:22:03.970779 kernel: acpiphp: Slot [0-3] registered Nov 6 00:22:03.970906 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 6 00:22:03.970924 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 00:22:03.970941 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 00:22:03.970953 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 00:22:03.970966 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 00:22:03.970978 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 00:22:03.970991 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 00:22:03.971003 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 00:22:03.971015 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 00:22:03.971028 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 00:22:03.971040 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 00:22:03.971054 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 00:22:03.971067 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 00:22:03.971079 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 00:22:03.971091 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 00:22:03.971103 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 6 00:22:03.971115 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 00:22:03.971127 kernel: iommu: Default domain type: Translated Nov 6 00:22:03.971140 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 00:22:03.971152 kernel: PCI: Using ACPI for IRQ routing Nov 6 00:22:03.971166 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 00:22:03.971178 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 00:22:03.971190 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff] Nov 6 00:22:03.971307 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 00:22:03.971421 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 00:22:03.971534 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 00:22:03.971555 kernel: vgaarb: loaded Nov 6 00:22:03.971573 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 6 00:22:03.971594 kernel: hpet0: 3 comparators, 64-bit 
100.000000 MHz counter Nov 6 00:22:03.971650 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 00:22:03.971663 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 00:22:03.971675 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 00:22:03.971688 kernel: pnp: PnP ACPI init Nov 6 00:22:03.971841 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 6 00:22:03.971860 kernel: pnp: PnP ACPI: found 5 devices Nov 6 00:22:03.971873 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 00:22:03.971885 kernel: NET: Registered PF_INET protocol family Nov 6 00:22:03.971902 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 00:22:03.971915 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 00:22:03.971928 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 00:22:03.971940 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 00:22:03.971952 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 00:22:03.971965 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 00:22:03.971977 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:22:03.971990 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 00:22:03.972002 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 00:22:03.972016 kernel: NET: Registered PF_XDP protocol family Nov 6 00:22:03.972134 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 6 00:22:03.972250 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 6 00:22:03.972365 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 6 00:22:03.972482 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned Nov 6 00:22:03.972595 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned Nov 6 00:22:03.972756 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned Nov 6 00:22:03.972888 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Nov 6 00:22:03.973006 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff] Nov 6 00:22:03.973119 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 6 00:22:03.973261 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Nov 6 00:22:03.973391 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff] Nov 6 00:22:03.973506 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 00:22:03.973664 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Nov 6 00:22:03.973781 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff] Nov 6 00:22:03.973915 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 00:22:03.974029 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Nov 6 00:22:03.974143 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff] Nov 6 00:22:03.974264 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 00:22:03.974378 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Nov 6 00:22:03.974490 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff] Nov 6 00:22:03.974629 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 00:22:03.974747 kernel: pci 0000:00:02.5: PCI bridge to 
[bus 06] Nov 6 00:22:03.974875 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff] Nov 6 00:22:03.974995 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 00:22:03.975110 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Nov 6 00:22:03.975224 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff] Nov 6 00:22:03.975338 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff] Nov 6 00:22:03.975457 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 00:22:03.975583 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Nov 6 00:22:03.975757 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff] Nov 6 00:22:03.975889 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff] Nov 6 00:22:03.976004 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 00:22:03.976117 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Nov 6 00:22:03.976231 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff] Nov 6 00:22:03.976348 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 6 00:22:03.976462 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 00:22:03.976569 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 00:22:03.976700 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 6 00:22:03.976814 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 00:22:03.976917 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window] Nov 6 00:22:03.977017 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 6 00:22:03.977124 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Nov 6 00:22:03.977265 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 6 00:22:03.977408 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref] Nov 6 00:22:03.977573 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 6 00:22:03.977730 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 00:22:03.977864 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 6 00:22:03.977972 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 00:22:03.978093 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 6 00:22:03.978238 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 00:22:03.978372 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 6 00:22:03.978478 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 00:22:03.978590 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff] Nov 6 00:22:03.978728 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 00:22:03.978861 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff] Nov 6 00:22:03.978968 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 6 00:22:03.979072 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 00:22:03.979192 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff] Nov 6 00:22:03.979298 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff] Nov 6 00:22:03.979401 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 00:22:03.979509 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff] Nov 6 00:22:03.979702 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff] 
Nov 6 00:22:03.979833 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 00:22:03.979852 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 00:22:03.979870 kernel: PCI: CLS 0 bytes, default 64 Nov 6 00:22:03.979883 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f7eb13dd7, max_idle_ns: 440795202126 ns Nov 6 00:22:03.979896 kernel: Initialise system trusted keyrings Nov 6 00:22:03.979909 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 00:22:03.979922 kernel: Key type asymmetric registered Nov 6 00:22:03.979935 kernel: Asymmetric key parser 'x509' registered Nov 6 00:22:03.979947 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 00:22:03.979960 kernel: io scheduler mq-deadline registered Nov 6 00:22:03.979975 kernel: io scheduler kyber registered Nov 6 00:22:03.979988 kernel: io scheduler bfq registered Nov 6 00:22:03.980103 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 6 00:22:03.980218 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 6 00:22:03.980332 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 6 00:22:03.980445 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 6 00:22:03.980558 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 6 00:22:03.980745 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 6 00:22:03.980882 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 6 00:22:03.981003 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 6 00:22:03.981115 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 6 00:22:03.981229 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 6 00:22:03.981342 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 6 00:22:03.981421 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 6 00:22:03.981486 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 6 00:22:03.981550 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 6 00:22:03.981638 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 6 00:22:03.981709 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 6 00:22:03.981719 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 00:22:03.981783 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Nov 6 00:22:03.981860 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32 Nov 6 00:22:03.981871 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 00:22:03.981878 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Nov 6 00:22:03.981888 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 00:22:03.981895 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 00:22:03.981903 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 00:22:03.981910 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 00:22:03.981917 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 00:22:03.981984 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 6 00:22:03.981995 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 6 00:22:03.982054 kernel: rtc_cmos 00:03: registered as rtc0 Nov 6 00:22:03.982117 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T00:22:03 UTC (1762388523) Nov 6 00:22:03.982176 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 6 00:22:03.982186 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 6 
00:22:03.982193 kernel: NET: Registered PF_INET6 protocol family Nov 6 00:22:03.982201 kernel: Segment Routing with IPv6 Nov 6 00:22:03.982208 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 00:22:03.982215 kernel: NET: Registered PF_PACKET protocol family Nov 6 00:22:03.982222 kernel: Key type dns_resolver registered Nov 6 00:22:03.982232 kernel: IPI shorthand broadcast: enabled Nov 6 00:22:03.982239 kernel: sched_clock: Marking stable (3706012456, 258376794)->(4003103189, -38713939) Nov 6 00:22:03.982246 kernel: registered taskstats version 1 Nov 6 00:22:03.982253 kernel: Loading compiled-in X.509 certificates Nov 6 00:22:03.982261 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: f906521ec29cbf079ae365554bad8eb8ed6ecb31' Nov 6 00:22:03.982268 kernel: Demotion targets for Node 0: null Nov 6 00:22:03.982276 kernel: Key type .fscrypt registered Nov 6 00:22:03.982283 kernel: Key type fscrypt-provisioning registered Nov 6 00:22:03.982290 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 00:22:03.982297 kernel: ima: Allocated hash algorithm: sha1 Nov 6 00:22:03.982305 kernel: ima: No architecture policies found Nov 6 00:22:03.982313 kernel: clk: Disabling unused clocks Nov 6 00:22:03.982320 kernel: Warning: unable to open an initial console. Nov 6 00:22:03.982328 kernel: Freeing unused kernel image (initmem) memory: 45548K Nov 6 00:22:03.982335 kernel: Write protecting the kernel read-only data: 40960k Nov 6 00:22:03.982342 kernel: Freeing unused kernel image (rodata/data gap) memory: 576K Nov 6 00:22:03.982349 kernel: Run /init as init process Nov 6 00:22:03.982356 kernel: with arguments: Nov 6 00:22:03.982364 kernel: /init Nov 6 00:22:03.982372 kernel: with environment: Nov 6 00:22:03.982379 kernel: HOME=/ Nov 6 00:22:03.982386 kernel: TERM=linux Nov 6 00:22:03.982394 systemd[1]: Successfully made /usr/ read-only. Nov 6 00:22:03.982405 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:22:03.982413 systemd[1]: Detected virtualization kvm. Nov 6 00:22:03.982421 systemd[1]: Detected architecture x86-64. Nov 6 00:22:03.982430 systemd[1]: Running in initrd. Nov 6 00:22:03.982439 systemd[1]: No hostname configured, using default hostname. Nov 6 00:22:03.982447 systemd[1]: Hostname set to . Nov 6 00:22:03.982455 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:22:03.982462 systemd[1]: Queued start job for default target initrd.target. Nov 6 00:22:03.982471 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:22:03.982479 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:22:03.982487 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 00:22:03.982497 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:22:03.982505 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 00:22:03.982514 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
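At this point the initrd has made /usr read-only and is waiting for the block devices named on the command line, which it expects to surface as udev-managed symlinks under /dev/disk/by-label, /dev/disk/by-partlabel and /dev/disk/by-partuuid (ROOT, OEM, EFI-SYSTEM, USR-A and the verity partition's PARTUUID). A minimal sketch that resolves the same links once udev has populated them; the example output in the comment is an assumption about this particular layout:

    # list_disk_symlinks.py - resolve the /dev/disk/* links the initrd units above wait for
    import os

    for subdir in ("by-label", "by-partlabel", "by-partuuid"):
        path = os.path.join("/dev/disk", subdir)
        if not os.path.isdir(path):
            continue  # directory appears only after udev has processed the disks
        for name in sorted(os.listdir(path)):
            target = os.path.realpath(os.path.join(path, name))
            print(f"{subdir}/{name} -> {target}")  # e.g. by-label/ROOT -> one of the sda partitions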
Nov 6 00:22:03.982524 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 6 00:22:03.982532 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 6 00:22:03.982540 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:22:03.982549 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:03.982558 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:22:03.982566 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:22:03.982574 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:22:03.982582 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:22:03.982591 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:22:03.982613 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:22:03.982621 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 00:22:03.982629 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 00:22:03.982637 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:22:03.982646 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:03.982654 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:03.982662 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:22:03.982669 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 00:22:03.982677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:22:03.982685 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 00:22:03.982693 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 00:22:03.982701 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 00:22:03.982710 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:22:03.982717 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:22:03.982725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:03.982733 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 00:22:03.982763 systemd-journald[199]: Collecting audit messages is disabled. Nov 6 00:22:03.982785 systemd-journald[199]: Journal started Nov 6 00:22:03.982814 systemd-journald[199]: Runtime Journal (/run/log/journal/d5a7ecb951f14e78b35db268122a6aff) is 4.7M, max 38.3M, 33.5M free. Nov 6 00:22:03.941625 systemd-modules-load[200]: Inserted module 'overlay' Nov 6 00:22:04.050937 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 00:22:04.050958 kernel: Bridge firewalling registered Nov 6 00:22:04.050967 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:22:03.988042 systemd-modules-load[200]: Inserted module 'br_netfilter' Nov 6 00:22:04.052169 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:22:04.053767 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 00:22:04.055456 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Nov 6 00:22:04.057139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:04.061303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 00:22:04.067108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:22:04.069908 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:22:04.076031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:22:04.091720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:22:04.094798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:22:04.094930 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 00:22:04.097159 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:22:04.098698 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:04.102702 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 00:22:04.107158 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:22:04.113789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:22:04.123746 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=59ca0b9e28689480cec05e5a7a50ffb2fd81e743a9e2986eb3bceb3b87f6702e Nov 6 00:22:04.129284 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:22:04.167815 systemd-resolved[234]: Positive Trust Anchors: Nov 6 00:22:04.168713 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:22:04.168757 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:22:04.174188 systemd-resolved[234]: Defaulting to hostname 'linux'. Nov 6 00:22:04.175020 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:22:04.176062 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:04.187625 kernel: SCSI subsystem initialized Nov 6 00:22:04.196634 kernel: Loading iSCSI transport class v2.0-870. Nov 6 00:22:04.207643 kernel: iscsi: registered transport (tcp) Nov 6 00:22:04.240844 kernel: iscsi: registered transport (qla4xxx) Nov 6 00:22:04.240913 kernel: QLogic iSCSI HBA Driver Nov 6 00:22:04.255882 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Nov 6 00:22:04.267740 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:04.271643 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:22:04.311418 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 6 00:22:04.314722 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 00:22:04.370692 kernel: raid6: avx2x4 gen() 26063 MB/s Nov 6 00:22:04.388673 kernel: raid6: avx2x2 gen() 29213 MB/s Nov 6 00:22:04.407747 kernel: raid6: avx2x1 gen() 25205 MB/s Nov 6 00:22:04.407851 kernel: raid6: using algorithm avx2x2 gen() 29213 MB/s Nov 6 00:22:04.427794 kernel: raid6: .... xor() 20522 MB/s, rmw enabled Nov 6 00:22:04.427912 kernel: raid6: using avx2x2 recovery algorithm Nov 6 00:22:04.447668 kernel: xor: automatically using best checksumming function avx Nov 6 00:22:04.601658 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 00:22:04.608972 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:22:04.613350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:04.635491 systemd-udevd[447]: Using default interface naming scheme 'v255'. Nov 6 00:22:04.640011 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:04.646003 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 00:22:04.670320 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Nov 6 00:22:04.700735 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:22:04.704795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:22:04.766274 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:04.772034 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 00:22:04.860810 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 00:22:04.860872 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Nov 6 00:22:04.870620 kernel: scsi host0: Virtio SCSI HBA Nov 6 00:22:04.902239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:04.902352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:04.905367 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:04.912110 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Nov 6 00:22:04.907788 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:04.959641 kernel: ACPI: bus type USB registered Nov 6 00:22:04.959688 kernel: usbcore: registered new interface driver usbfs Nov 6 00:22:04.959698 kernel: usbcore: registered new interface driver hub Nov 6 00:22:04.959707 kernel: usbcore: registered new device driver usb Nov 6 00:22:04.959715 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 6 00:22:04.965620 kernel: AES CTR mode by8 optimization enabled Nov 6 00:22:04.965640 kernel: libata version 3.00 loaded. 
Nov 6 00:22:04.975630 kernel: sd 0:0:0:0: Power-on or device reset occurred Nov 6 00:22:04.975813 kernel: sd 0:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Nov 6 00:22:04.975907 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 6 00:22:04.975989 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Nov 6 00:22:04.976071 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Nov 6 00:22:04.984624 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 00:22:04.984645 kernel: GPT:17805311 != 80003071 Nov 6 00:22:04.984654 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 00:22:04.984663 kernel: GPT:17805311 != 80003071 Nov 6 00:22:04.984672 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 00:22:04.984680 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 00:22:04.984690 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 6 00:22:05.004619 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 00:22:05.004838 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 6 00:22:05.004934 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 6 00:22:05.005017 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 00:22:05.005642 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 6 00:22:05.007674 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 6 00:22:05.007781 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 6 00:22:05.007905 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 6 00:22:05.009736 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 00:22:05.009866 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 00:22:05.009950 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 00:22:05.010688 kernel: hub 1-0:1.0: USB hub found Nov 6 00:22:05.011701 kernel: hub 1-0:1.0: 4 ports detected Nov 6 00:22:05.012647 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 6 00:22:05.013616 kernel: scsi host1: ahci Nov 6 00:22:05.013749 kernel: hub 2-0:1.0: USB hub found Nov 6 00:22:05.013856 kernel: hub 2-0:1.0: 4 ports detected Nov 6 00:22:05.013936 kernel: scsi host2: ahci Nov 6 00:22:05.018619 kernel: scsi host3: ahci Nov 6 00:22:05.019721 kernel: scsi host4: ahci Nov 6 00:22:05.019843 kernel: scsi host5: ahci Nov 6 00:22:05.019925 kernel: scsi host6: ahci Nov 6 00:22:05.020008 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 48 lpm-pol 1 Nov 6 00:22:05.020019 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 48 lpm-pol 1 Nov 6 00:22:05.020027 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 48 lpm-pol 1 Nov 6 00:22:05.020036 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 48 lpm-pol 1 Nov 6 00:22:05.020045 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 48 lpm-pol 1 Nov 6 00:22:05.020053 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 48 lpm-pol 1 Nov 6 00:22:05.061813 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Nov 6 00:22:05.152046 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:05.188001 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
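The GPT complaints above ("Primary header thinks Alt. header is not at the end of the disk", 17805311 != 80003071) are the usual symptom of a disk image written to a larger target: the primary header still records the backup header at LBA 17805311 while the 80003072-sector disk actually ends at LBA 80003071; the disk-uuid step a little further down reports rewriting the primary and secondary headers. A minimal sketch of the same check, assuming 512-byte logical sectors as reported for sda (run it against the device, which needs root, or against a raw image file):

    # gpt_alt_check.py - compare the primary GPT header's alternate-LBA field with the
    # device's real last LBA (assumes 512-byte logical sectors, as reported for sda above)
    import os, struct, sys

    SECTOR = 512

    def check(path: str) -> None:
        with open(path, "rb") as f:
            f.seek(SECTOR)                 # primary GPT header lives at LBA 1
            hdr = f.read(92)
            if hdr[:8] != b"EFI PART":
                sys.exit(f"{path}: no GPT signature found")
            alt_lba = struct.unpack_from("<Q", hdr, 32)[0]  # AlternateLBA field, offset 32
            f.seek(0, os.SEEK_END)
            last_lba = f.tell() // SECTOR - 1
        if alt_lba != last_lba:
            print(f"backup header recorded at LBA {alt_lba}, disk ends at LBA {last_lba} (mismatch)")
        else:
            print("backup GPT header is at the end of the disk")

    if __name__ == "__main__":
        check(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda")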
Nov 6 00:22:05.202823 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 6 00:22:05.215212 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Nov 6 00:22:05.216295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Nov 6 00:22:05.221726 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 00:22:05.240789 disk-uuid[612]: Primary Header is updated. Nov 6 00:22:05.240789 disk-uuid[612]: Secondary Entries is updated. Nov 6 00:22:05.240789 disk-uuid[612]: Secondary Header is updated. Nov 6 00:22:05.255906 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 6 00:22:05.256003 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 00:22:05.332637 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 00:22:05.332710 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 00:22:05.332733 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 00:22:05.340630 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 6 00:22:05.340684 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 6 00:22:05.343631 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 00:22:05.347489 kernel: ata1.00: LPM support broken, forcing max_power Nov 6 00:22:05.347517 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 6 00:22:05.347532 kernel: ata1.00: applying bridge limits Nov 6 00:22:05.349671 kernel: ata1.00: LPM support broken, forcing max_power Nov 6 00:22:05.352466 kernel: ata1.00: configured for UDMA/100 Nov 6 00:22:05.357631 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 6 00:22:05.395761 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 00:22:05.401642 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 6 00:22:05.401817 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 6 00:22:05.409615 kernel: usbcore: registered new interface driver usbhid Nov 6 00:22:05.409642 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Nov 6 00:22:05.409753 kernel: usbhid: USB HID core driver Nov 6 00:22:05.427621 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 Nov 6 00:22:05.435624 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Nov 6 00:22:05.707847 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 00:22:05.712112 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:22:05.713676 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:05.717087 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:22:05.721781 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 00:22:05.763754 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:22:06.282053 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 6 00:22:06.282132 disk-uuid[613]: The operation has completed successfully. Nov 6 00:22:06.359841 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 00:22:06.359969 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 00:22:06.425048 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 6 00:22:06.446307 sh[645]: Success Nov 6 00:22:06.480741 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 6 00:22:06.480830 kernel: device-mapper: uevent: version 1.0.3 Nov 6 00:22:06.484195 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 00:22:06.500629 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 6 00:22:06.559331 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 6 00:22:06.563729 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 6 00:22:06.578184 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 6 00:22:06.590851 kernel: BTRFS: device fsid 85d805c5-984c-4a6a-aaeb-49fff3689175 devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (657) Nov 6 00:22:06.594820 kernel: BTRFS info (device dm-0): first mount of filesystem 85d805c5-984c-4a6a-aaeb-49fff3689175 Nov 6 00:22:06.594864 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:06.611990 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 6 00:22:06.612073 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 00:22:06.616487 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 00:22:06.622274 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 6 00:22:06.623574 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:22:06.625260 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 00:22:06.626035 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 00:22:06.630719 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 00:22:06.678165 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (706) Nov 6 00:22:06.678238 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:06.682689 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:06.690548 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 00:22:06.690580 kernel: BTRFS info (device sda6): turning on async discard Nov 6 00:22:06.690594 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 00:22:06.698619 kernel: BTRFS info (device sda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:06.698965 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 00:22:06.702700 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 00:22:06.737591 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:22:06.743814 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:22:06.798834 systemd-networkd[826]: lo: Link UP Nov 6 00:22:06.798841 systemd-networkd[826]: lo: Gained carrier Nov 6 00:22:06.800710 systemd-networkd[826]: Enumeration completed Nov 6 00:22:06.801055 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:22:06.801915 systemd-networkd[826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 6 00:22:06.801919 systemd-networkd[826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:06.802817 systemd-networkd[826]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:06.802820 systemd-networkd[826]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:06.803328 systemd[1]: Reached target network.target - Network. Nov 6 00:22:06.803727 systemd-networkd[826]: eth0: Link UP Nov 6 00:22:06.803845 systemd-networkd[826]: eth1: Link UP Nov 6 00:22:06.803976 systemd-networkd[826]: eth0: Gained carrier Nov 6 00:22:06.803983 systemd-networkd[826]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:06.809451 systemd-networkd[826]: eth1: Gained carrier Nov 6 00:22:06.809470 systemd-networkd[826]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:06.838754 ignition[794]: Ignition 2.22.0 Nov 6 00:22:06.838767 ignition[794]: Stage: fetch-offline Nov 6 00:22:06.838795 ignition[794]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:06.838813 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 00:22:06.840967 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:22:06.838911 ignition[794]: parsed url from cmdline: "" Nov 6 00:22:06.842928 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 6 00:22:06.838914 ignition[794]: no config URL provided Nov 6 00:22:06.838918 ignition[794]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:22:06.838924 ignition[794]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:22:06.838930 ignition[794]: failed to fetch config: resource requires networking Nov 6 00:22:06.839165 ignition[794]: Ignition finished successfully Nov 6 00:22:06.853685 systemd-networkd[826]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 6 00:22:06.867690 systemd-networkd[826]: eth0: DHCPv4 address 135.181.151.25/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 6 00:22:06.869019 ignition[836]: Ignition 2.22.0 Nov 6 00:22:06.869032 ignition[836]: Stage: fetch Nov 6 00:22:06.869147 ignition[836]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:06.869155 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 00:22:06.869229 ignition[836]: parsed url from cmdline: "" Nov 6 00:22:06.869232 ignition[836]: no config URL provided Nov 6 00:22:06.869236 ignition[836]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 00:22:06.869241 ignition[836]: no config at "/usr/lib/ignition/user.ign" Nov 6 00:22:06.869276 ignition[836]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Nov 6 00:22:06.873330 ignition[836]: GET result: OK Nov 6 00:22:06.873401 ignition[836]: parsing config with SHA512: a62178e8b4f7358a6e373191d63f973780a1886cab43cee51d6348f5c9356dc2684b751fe57f34f5fa7f7c6043c799940ed0fd753a344b0017b64d8d6a1ae7c4 Nov 6 00:22:06.877726 unknown[836]: fetched base config from "system" Nov 6 00:22:06.878465 ignition[836]: fetch: fetch complete Nov 6 00:22:06.877750 unknown[836]: fetched base config from "system" Nov 6 00:22:06.878473 ignition[836]: fetch: fetch passed Nov 6 00:22:06.877760 unknown[836]: fetched user config from "hetzner" Nov 6 00:22:06.878529 ignition[836]: Ignition finished successfully Nov 6 
00:22:06.880558 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 00:22:06.882175 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 00:22:06.908254 ignition[844]: Ignition 2.22.0 Nov 6 00:22:06.908269 ignition[844]: Stage: kargs Nov 6 00:22:06.908395 ignition[844]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:06.910834 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 00:22:06.908404 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 00:22:06.909407 ignition[844]: kargs: kargs passed Nov 6 00:22:06.909446 ignition[844]: Ignition finished successfully Nov 6 00:22:06.914699 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 00:22:06.958472 ignition[850]: Ignition 2.22.0 Nov 6 00:22:06.958493 ignition[850]: Stage: disks Nov 6 00:22:06.958709 ignition[850]: no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:06.958722 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 00:22:06.962348 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 00:22:06.959943 ignition[850]: disks: disks passed Nov 6 00:22:06.964638 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 00:22:06.959995 ignition[850]: Ignition finished successfully Nov 6 00:22:06.966420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 00:22:06.968006 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:22:06.970242 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:22:06.971827 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:22:06.975736 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 00:22:07.004010 systemd-fsck[859]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Nov 6 00:22:07.008247 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 00:22:07.011728 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 00:22:07.159637 kernel: EXT4-fs (sda9): mounted filesystem 25ee01aa-0270-4de7-b5da-d8936d968d16 r/w with ordered data mode. Quota mode: none. Nov 6 00:22:07.161450 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 00:22:07.163193 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 00:22:07.167103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:07.171991 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 00:22:07.182785 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 6 00:22:07.186644 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 00:22:07.188003 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:22:07.191933 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 00:22:07.205722 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (867) Nov 6 00:22:07.204870 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 6 00:22:07.236260 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:07.236298 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:07.236319 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 00:22:07.236339 kernel: BTRFS info (device sda6): turning on async discard Nov 6 00:22:07.236359 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 00:22:07.239828 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 00:22:07.281113 coreos-metadata[869]: Nov 06 00:22:07.281 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Nov 6 00:22:07.283103 coreos-metadata[869]: Nov 06 00:22:07.283 INFO Fetch successful Nov 6 00:22:07.284653 coreos-metadata[869]: Nov 06 00:22:07.284 INFO wrote hostname ci-4459-1-0-n-bff22aa786 to /sysroot/etc/hostname Nov 6 00:22:07.287110 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:22:07.302843 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 00:22:07.307414 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Nov 6 00:22:07.312969 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 00:22:07.317397 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 00:22:07.440156 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 00:22:07.445001 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 00:22:07.450193 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 00:22:07.472669 kernel: BTRFS info (device sda6): last unmount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:07.497175 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 6 00:22:07.509580 ignition[985]: INFO : Ignition 2.22.0 Nov 6 00:22:07.509580 ignition[985]: INFO : Stage: mount Nov 6 00:22:07.511293 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:07.511293 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 00:22:07.511293 ignition[985]: INFO : mount: mount passed Nov 6 00:22:07.511293 ignition[985]: INFO : Ignition finished successfully Nov 6 00:22:07.511825 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 00:22:07.514044 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 00:22:07.589215 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 00:22:07.591977 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 00:22:07.624652 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (996) Nov 6 00:22:07.632743 kernel: BTRFS info (device sda6): first mount of filesystem ca2bb832-66d5-4dca-a6d2-cbf7440d9381 Nov 6 00:22:07.632838 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 00:22:07.647466 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 6 00:22:07.647520 kernel: BTRFS info (device sda6): turning on async discard Nov 6 00:22:07.650657 kernel: BTRFS info (device sda6): enabling free space tree Nov 6 00:22:07.658594 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 00:22:07.704032 ignition[1012]: INFO : Ignition 2.22.0 Nov 6 00:22:07.704032 ignition[1012]: INFO : Stage: files Nov 6 00:22:07.707291 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:07.707291 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 00:22:07.707291 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Nov 6 00:22:07.707291 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 00:22:07.707291 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 00:22:07.717526 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 00:22:07.717526 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 00:22:07.717526 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 00:22:07.714942 unknown[1012]: wrote ssh authorized keys file for user: core Nov 6 00:22:07.725419 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:07.725419 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 00:22:07.926713 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 00:22:08.227547 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 00:22:08.227547 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 00:22:08.233217 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:08.256186 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:08.256186 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:08.256186 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 00:22:08.406497 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 00:22:08.695057 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 00:22:08.696353 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 00:22:08.698817 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:08.703718 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 00:22:08.703718 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 00:22:08.703718 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 00:22:08.709130 ignition[1012]: INFO : files: files passed Nov 6 00:22:08.709130 ignition[1012]: INFO : Ignition finished successfully Nov 6 00:22:08.706108 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 00:22:08.710857 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 00:22:08.718788 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 00:22:08.727821 systemd-networkd[826]: eth0: Gained IPv6LL Nov 6 00:22:08.733303 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 00:22:08.734136 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Nov 6 00:22:08.741508 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:08.741508 initrd-setup-root-after-ignition[1043]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:08.745041 initrd-setup-root-after-ignition[1047]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 00:22:08.747634 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:22:08.750700 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 00:22:08.753411 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 00:22:08.789154 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 00:22:08.789319 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 00:22:08.790864 systemd-networkd[826]: eth1: Gained IPv6LL Nov 6 00:22:08.793428 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 6 00:22:08.795667 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 00:22:08.797558 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 00:22:08.799851 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 00:22:08.844842 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:22:08.850908 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 00:22:08.880973 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:08.882884 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:08.885534 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 00:22:08.888129 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 00:22:08.888295 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 00:22:08.891192 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 00:22:08.892684 systemd[1]: Stopped target basic.target - Basic System. Nov 6 00:22:08.895223 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 00:22:08.897570 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 00:22:08.899917 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 00:22:08.902448 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 00:22:08.905184 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 00:22:08.907836 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 00:22:08.910814 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 00:22:08.913235 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 00:22:08.915828 systemd[1]: Stopped target swap.target - Swaps. Nov 6 00:22:08.918259 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 00:22:08.918462 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 00:22:08.921352 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:08.922895 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 6 00:22:08.925177 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 00:22:08.926771 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 00:22:08.927821 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 00:22:08.928022 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 00:22:08.931527 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 00:22:08.931727 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 00:22:08.933128 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 00:22:08.933298 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 00:22:08.935272 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 6 00:22:08.935399 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 6 00:22:08.952699 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 00:22:08.966821 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 00:22:08.967831 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 00:22:08.968037 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:08.971791 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 00:22:08.971981 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 00:22:08.980147 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 00:22:08.981726 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 00:22:08.991326 ignition[1067]: INFO : Ignition 2.22.0 Nov 6 00:22:08.991326 ignition[1067]: INFO : Stage: umount Nov 6 00:22:08.991326 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 00:22:08.991326 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Nov 6 00:22:09.001167 ignition[1067]: INFO : umount: umount passed Nov 6 00:22:09.001167 ignition[1067]: INFO : Ignition finished successfully Nov 6 00:22:08.994058 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 00:22:08.994184 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 00:22:08.997877 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 00:22:08.997965 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 00:22:09.001853 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 00:22:09.001908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 00:22:09.004795 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 00:22:09.004867 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 00:22:09.006475 systemd[1]: Stopped target network.target - Network. Nov 6 00:22:09.010348 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 00:22:09.010429 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 00:22:09.012676 systemd[1]: Stopped target paths.target - Path Units. Nov 6 00:22:09.014679 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 00:22:09.019699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:22:09.021376 systemd[1]: Stopped target slices.target - Slice Units. 
Nov 6 00:22:09.023695 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 00:22:09.026076 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 00:22:09.026120 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 00:22:09.028087 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 00:22:09.028126 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 00:22:09.030355 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 00:22:09.030411 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 00:22:09.032890 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 00:22:09.032938 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 00:22:09.035078 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 00:22:09.037123 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 6 00:22:09.042426 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 00:22:09.043159 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 00:22:09.043263 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 00:22:09.048076 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 00:22:09.048309 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 00:22:09.048410 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 00:22:09.050989 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 00:22:09.051063 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 00:22:09.053195 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 00:22:09.053247 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:09.058946 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:22:09.059155 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 00:22:09.059262 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 00:22:09.062693 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 00:22:09.063115 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 00:22:09.065047 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 00:22:09.065085 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:22:09.067951 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 00:22:09.070190 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 00:22:09.070246 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 00:22:09.073269 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 00:22:09.073323 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:22:09.077109 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 00:22:09.077189 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 00:22:09.079526 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:09.086618 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 00:22:09.097998 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 6 00:22:09.100947 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:09.103454 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 00:22:09.103554 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 00:22:09.105512 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 00:22:09.105573 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:09.107332 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 00:22:09.107369 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:09.109086 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 00:22:09.109143 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 00:22:09.111907 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 00:22:09.111960 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 00:22:09.114232 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 00:22:09.114302 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 00:22:09.117516 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 00:22:09.120261 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 6 00:22:09.120324 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:09.123716 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 00:22:09.123768 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 00:22:09.126698 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 6 00:22:09.126754 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:22:09.128713 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 00:22:09.128764 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:22:09.130957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:09.131011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:09.138450 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 00:22:09.138547 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 00:22:09.146688 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 00:22:09.149448 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 00:22:09.168569 systemd[1]: Switching root. Nov 6 00:22:09.207018 systemd-journald[199]: Journal stopped Nov 6 00:22:10.365855 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). 
Nov 6 00:22:10.365903 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 00:22:10.365914 kernel: SELinux: policy capability open_perms=1 Nov 6 00:22:10.365927 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 00:22:10.365937 kernel: SELinux: policy capability always_check_network=0 Nov 6 00:22:10.365945 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 00:22:10.365954 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 00:22:10.365963 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 00:22:10.365974 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 00:22:10.365982 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 00:22:10.365992 kernel: audit: type=1403 audit(1762388529.464:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 00:22:10.366009 systemd[1]: Successfully loaded SELinux policy in 89.271ms. Nov 6 00:22:10.366026 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.041ms. Nov 6 00:22:10.366037 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 00:22:10.366047 systemd[1]: Detected virtualization kvm. Nov 6 00:22:10.366057 systemd[1]: Detected architecture x86-64. Nov 6 00:22:10.366067 systemd[1]: Detected first boot. Nov 6 00:22:10.366077 systemd[1]: Hostname set to <ci-4459-1-0-n-bff22aa786>. Nov 6 00:22:10.366087 systemd[1]: Initializing machine ID from VM UUID. Nov 6 00:22:10.366096 zram_generator::config[1111]: No configuration found. Nov 6 00:22:10.366107 kernel: Guest personality initialized and is inactive Nov 6 00:22:10.366116 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 00:22:10.366125 kernel: Initialized host personality Nov 6 00:22:10.366134 kernel: NET: Registered PF_VSOCK protocol family Nov 6 00:22:10.366144 systemd[1]: Populated /etc with preset unit settings. Nov 6 00:22:10.366156 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 00:22:10.366165 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 00:22:10.366175 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 00:22:10.366184 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 00:22:10.366194 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 00:22:10.366204 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 00:22:10.366213 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 00:22:10.366223 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 00:22:10.366234 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 00:22:10.366249 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 00:22:10.366260 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 00:22:10.366270 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 00:22:10.366280 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 00:22:10.366290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 00:22:10.366301 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 00:22:10.366311 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 00:22:10.366325 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 00:22:10.366335 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 00:22:10.366345 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 00:22:10.366355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 00:22:10.366366 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 00:22:10.366377 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 00:22:10.366387 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 00:22:10.366397 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 00:22:10.366407 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 00:22:10.366416 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 00:22:10.366426 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 00:22:10.366436 systemd[1]: Reached target slices.target - Slice Units. Nov 6 00:22:10.366445 systemd[1]: Reached target swap.target - Swaps. Nov 6 00:22:10.366456 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 00:22:10.366466 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 00:22:10.366476 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 00:22:10.366486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 00:22:10.366496 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 00:22:10.366505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 00:22:10.366515 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 00:22:10.366524 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 00:22:10.366537 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 00:22:10.366548 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 00:22:10.366558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:10.366568 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 00:22:10.366578 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 00:22:10.366588 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 00:22:10.366611 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 00:22:10.366620 systemd[1]: Reached target machines.target - Containers. Nov 6 00:22:10.366630 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 6 00:22:10.366642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:10.366652 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 00:22:10.366662 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 00:22:10.366672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:10.366682 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:22:10.366692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:10.366701 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 00:22:10.366711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:10.366721 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 00:22:10.366733 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 00:22:10.366742 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 00:22:10.366752 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 00:22:10.366762 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 00:22:10.366772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:10.366782 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 00:22:10.366792 kernel: loop: module loaded Nov 6 00:22:10.366809 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 00:22:10.366818 kernel: fuse: init (API version 7.41) Nov 6 00:22:10.366831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 00:22:10.366841 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 00:22:10.366850 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 00:22:10.366861 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 00:22:10.366872 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 00:22:10.366883 kernel: ACPI: bus type drm_connector registered Nov 6 00:22:10.366892 systemd[1]: Stopped verity-setup.service. Nov 6 00:22:10.366904 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:10.366914 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 00:22:10.366925 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 00:22:10.366948 systemd-journald[1202]: Collecting audit messages is disabled. Nov 6 00:22:10.366968 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 00:22:10.366979 systemd-journald[1202]: Journal started Nov 6 00:22:10.366999 systemd-journald[1202]: Runtime Journal (/run/log/journal/d5a7ecb951f14e78b35db268122a6aff) is 4.7M, max 38.3M, 33.5M free. Nov 6 00:22:10.010193 systemd[1]: Queued start job for default target multi-user.target. Nov 6 00:22:10.021355 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Nov 6 00:22:10.021731 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 00:22:10.372826 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 00:22:10.373452 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 00:22:10.374262 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 00:22:10.375093 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 00:22:10.375974 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 00:22:10.376865 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 00:22:10.377746 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 00:22:10.377948 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 00:22:10.378844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:10.379019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:10.379962 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:22:10.380127 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:22:10.380956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:10.381130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:10.382030 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 00:22:10.382203 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 00:22:10.383034 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:10.383153 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:10.384145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 00:22:10.385021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 00:22:10.385998 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 00:22:10.386898 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 00:22:10.396297 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 00:22:10.398670 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 00:22:10.401751 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 00:22:10.403183 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 00:22:10.403207 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 00:22:10.405022 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 00:22:10.410523 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 00:22:10.411739 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:10.413662 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 00:22:10.415466 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 00:22:10.417028 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 6 00:22:10.417695 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 00:22:10.418419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:10.419293 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 00:22:10.424754 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 00:22:10.432644 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 00:22:10.435997 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 00:22:10.437544 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 00:22:10.445684 systemd-journald[1202]: Time spent on flushing to /var/log/journal/d5a7ecb951f14e78b35db268122a6aff is 24.993ms for 1166 entries. Nov 6 00:22:10.445684 systemd-journald[1202]: System Journal (/var/log/journal/d5a7ecb951f14e78b35db268122a6aff) is 8M, max 584.8M, 576.8M free. Nov 6 00:22:10.509252 systemd-journald[1202]: Received client request to flush runtime journal. Nov 6 00:22:10.509288 kernel: loop0: detected capacity change from 0 to 8 Nov 6 00:22:10.509303 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 00:22:10.449642 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 00:22:10.450860 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 00:22:10.455122 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 00:22:10.477333 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 00:22:10.491894 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 00:22:10.510560 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 00:22:10.515633 kernel: loop1: detected capacity change from 0 to 110984 Nov 6 00:22:10.521810 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Nov 6 00:22:10.521826 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Nov 6 00:22:10.529103 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 00:22:10.532730 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 00:22:10.538191 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 00:22:10.558832 kernel: loop2: detected capacity change from 0 to 229808 Nov 6 00:22:10.574292 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 00:22:10.576309 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 00:22:10.592631 kernel: loop3: detected capacity change from 0 to 128016 Nov 6 00:22:10.620368 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Nov 6 00:22:10.621230 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Nov 6 00:22:10.632260 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 6 00:22:10.648281 kernel: loop4: detected capacity change from 0 to 8 Nov 6 00:22:10.653657 kernel: loop5: detected capacity change from 0 to 110984 Nov 6 00:22:10.677637 kernel: loop6: detected capacity change from 0 to 229808 Nov 6 00:22:10.705625 kernel: loop7: detected capacity change from 0 to 128016 Nov 6 00:22:10.733088 (sd-merge)[1264]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Nov 6 00:22:10.733443 (sd-merge)[1264]: Merged extensions into '/usr'. Nov 6 00:22:10.737259 systemd[1]: Reload requested from client PID 1236 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 00:22:10.737328 systemd[1]: Reloading... Nov 6 00:22:10.816978 zram_generator::config[1286]: No configuration found. Nov 6 00:22:10.933629 ldconfig[1231]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 00:22:10.999543 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 00:22:10.999871 systemd[1]: Reloading finished in 262 ms. Nov 6 00:22:11.016992 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 00:22:11.018056 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 00:22:11.029700 systemd[1]: Starting ensure-sysext.service... Nov 6 00:22:11.032681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 00:22:11.051873 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Nov 6 00:22:11.051883 systemd[1]: Reloading... Nov 6 00:22:11.053570 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 00:22:11.053595 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 00:22:11.054160 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 00:22:11.054370 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 00:22:11.055001 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 00:22:11.055212 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Nov 6 00:22:11.055253 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Nov 6 00:22:11.061253 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:11.061332 systemd-tmpfiles[1334]: Skipping /boot Nov 6 00:22:11.071933 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 00:22:11.074630 systemd-tmpfiles[1334]: Skipping /boot Nov 6 00:22:11.092669 zram_generator::config[1358]: No configuration found. Nov 6 00:22:11.254096 systemd[1]: Reloading finished in 201 ms. Nov 6 00:22:11.262122 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 00:22:11.263213 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 00:22:11.271695 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:22:11.280400 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 00:22:11.284947 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Nov 6 00:22:11.287850 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 00:22:11.293880 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 00:22:11.299925 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 00:22:11.327005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:11.328169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:11.335888 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:11.339423 systemd-udevd[1411]: Using default interface naming scheme 'v255'. Nov 6 00:22:11.342729 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:11.348979 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:11.352267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:11.352523 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:11.352724 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:11.365782 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 00:22:11.372006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:11.372740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:11.375305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:11.375494 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:11.377080 augenrules[1435]: No rules Nov 6 00:22:11.378644 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:22:11.378889 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:22:11.381131 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:11.381535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:11.384663 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 00:22:11.387209 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 00:22:11.407984 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:11.409472 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:22:11.411395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:11.414148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:11.419055 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 00:22:11.425403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:11.433696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Nov 6 00:22:11.435843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:11.435990 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:11.440856 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 00:22:11.447635 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 00:22:11.453932 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 00:22:11.455872 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:11.457333 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 00:22:11.469102 systemd[1]: Finished ensure-sysext.service. Nov 6 00:22:11.474739 augenrules[1467]: /sbin/augenrules: No change Nov 6 00:22:11.482211 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 00:22:11.488417 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:22:11.493723 augenrules[1495]: No rules Nov 6 00:22:11.493924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:11.504479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 00:22:11.505868 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:22:11.506063 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:22:11.507416 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 00:22:11.513435 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:11.514088 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:11.516140 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:11.517029 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:11.518865 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 00:22:11.519339 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 00:22:11.523411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:22:11.523499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:11.573458 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 00:22:11.600697 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 00:22:11.709631 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 00:22:11.718238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 6 00:22:11.722743 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
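Both augenrules invocations above report 'No rules': augenrules assembles the active audit ruleset from /etc/audit/rules.d/*.rules, and at this point those files contain nothing beyond comments. A small sketch of the same check (the file pattern is the stock augenrules location; adjust it if a distribution relocates the directory):

    # Count effective (non-comment, non-blank) audit rule lines in the
    # directory augenrules reads, matching the "No rules" result above.
    import glob

    def effective_rules(pattern="/etc/audit/rules.d/*.rules") -> list[str]:
        rules = []
        for path in sorted(glob.glob(pattern)):
            with open(path, encoding="utf-8") as f:
                for line in f:
                    line = line.strip()
                    if line and not line.startswith("#"):
                        rules.append(line)
        return rules

    if __name__ == "__main__":
        rules = effective_rules()
        print(f"{len(rules)} audit rule(s) found" if rules else "No rules")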
Nov 6 00:22:11.752720 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Nov 6 00:22:11.768178 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Nov 6 00:22:11.768234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:11.768319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 00:22:11.770931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 00:22:11.772518 systemd-networkd[1473]: lo: Link UP Nov 6 00:22:11.772526 systemd-networkd[1473]: lo: Gained carrier Nov 6 00:22:11.775865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 00:22:11.781265 systemd-networkd[1473]: Enumeration completed Nov 6 00:22:11.781816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 00:22:11.782547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 00:22:11.782583 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 00:22:11.782642 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 00:22:11.782656 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 00:22:11.782863 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 00:22:11.784059 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 00:22:11.789997 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:11.790001 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:11.792761 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 00:22:11.796504 systemd-networkd[1473]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:11.796513 systemd-networkd[1473]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 00:22:11.797158 systemd-networkd[1473]: eth0: Link UP Nov 6 00:22:11.797262 systemd-networkd[1473]: eth0: Gained carrier Nov 6 00:22:11.797275 systemd-networkd[1473]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:11.798746 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 00:22:11.800664 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 00:22:11.801635 systemd-networkd[1473]: eth1: Link UP Nov 6 00:22:11.801643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
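systemd-networkd logs each interface as 'Link UP' followed by 'Gained carrier' once it has matched the device to zz-default.network. The same low-level state can be read straight from sysfs; a minimal sketch using the interface names that appear in this log (operstate and carrier are standard kernel attributes under /sys/class/net):

    # Print link/carrier state for interfaces, mirroring the "Link UP" /
    # "Gained carrier" transitions logged by systemd-networkd above.
    from pathlib import Path

    def link_state(iface: str) -> dict:
        base = Path("/sys/class/net") / iface
        state = {"operstate": (base / "operstate").read_text().strip()}
        try:
            # carrier reads "1" when a carrier is detected; raises OSError
            # while the interface is administratively down.
            state["carrier"] = (base / "carrier").read_text().strip() == "1"
        except OSError:
            state["carrier"] = False
        return state

    if __name__ == "__main__":
        for iface in ("lo", "eth0", "eth1"):   # names as they appear in the log
            if (Path("/sys/class/net") / iface).exists():
                print(iface, link_state(iface))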
Nov 6 00:22:11.803393 systemd-networkd[1473]: eth1: Gained carrier Nov 6 00:22:11.803418 systemd-networkd[1473]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 00:22:11.809839 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 00:22:11.810804 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 00:22:11.810976 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 00:22:11.812921 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 00:22:11.813760 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 00:22:11.819177 systemd-resolved[1410]: Positive Trust Anchors: Nov 6 00:22:11.819190 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 00:22:11.819218 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 00:22:11.819241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 00:22:11.819861 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 00:22:11.820837 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 00:22:11.831174 systemd-resolved[1410]: Using system hostname 'ci-4459-1-0-n-bff22aa786'. Nov 6 00:22:11.832863 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 00:22:11.837685 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 00:22:11.838442 systemd[1]: Reached target network.target - Network. Nov 6 00:22:11.839151 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 00:22:11.839769 systemd-networkd[1473]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Nov 6 00:22:11.840201 systemd-timesyncd[1493]: Network configuration changed, trying to establish connection. Nov 6 00:22:11.840424 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 00:22:11.841475 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 00:22:11.842739 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 00:22:11.844406 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 00:22:11.845278 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 00:22:11.846126 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 00:22:11.847187 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 00:22:11.848236 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
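systemd-resolved prints the DNSSEC positive trust anchor as '. IN DS 20326 8 2 e06d44b8…'. Splitting that record into its named fields makes the log line easier to read; the layout follows the standard DS presentation format (owner, class, type, key tag, algorithm, digest type, digest):

    # Break the DS trust anchor logged by systemd-resolved into its fields.
    DS_RECORD = (". IN DS 20326 8 2 "
                 "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    def parse_ds(record: str) -> dict:
        owner, rr_class, rr_type, key_tag, algorithm, digest_type, digest = record.split()
        assert (rr_class, rr_type) == ("IN", "DS")
        return {
            "owner": owner,                  # "." is the DNS root zone
            "key_tag": int(key_tag),         # 20326 identifies the root key-signing key
            "algorithm": int(algorithm),     # 8 = RSA/SHA-256
            "digest_type": int(digest_type), # 2 = SHA-256
            "digest": digest,
        }

    if __name__ == "__main__":
        for key, value in parse_ds(DS_RECORD).items():
            print(f"{key}: {value}")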
Nov 6 00:22:11.848323 systemd[1]: Reached target paths.target - Path Units. Nov 6 00:22:11.849283 systemd[1]: Reached target timers.target - Timer Units. Nov 6 00:22:11.853086 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 00:22:11.855715 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 00:22:11.856550 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 00:22:11.858701 systemd-networkd[1473]: eth0: DHCPv4 address 135.181.151.25/32, gateway 172.31.1.1 acquired from 172.31.1.1 Nov 6 00:22:11.859967 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 00:22:11.864657 kernel: ACPI: button: Power Button [PWRF] Nov 6 00:22:11.867043 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 00:22:11.868342 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 00:22:11.870684 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 00:22:11.879107 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 00:22:11.881177 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 00:22:11.883422 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 00:22:11.886357 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 00:22:11.887035 systemd[1]: Reached target basic.target - Basic System. Nov 6 00:22:11.889861 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:22:11.889884 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 00:22:11.893109 kernel: EDAC MC: Ver: 3.0.0 Nov 6 00:22:11.891319 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 00:22:11.895875 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 00:22:11.901812 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 00:22:11.905789 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 00:22:11.908667 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 00:22:11.911846 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 00:22:11.913577 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 00:22:11.918194 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 00:22:11.925859 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 00:22:11.928956 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 00:22:11.931235 jq[1563]: false Nov 6 00:22:11.932054 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 6 00:22:11.937069 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 00:22:11.945926 extend-filesystems[1564]: Found /dev/sda6 Nov 6 00:22:11.946006 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 00:22:11.951270 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 00:22:11.953531 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
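eth0 receives 135.181.151.25/32 with gateway 172.31.1.1. A /32 prefix contains only the address itself, so the gateway is off-subnet and the stack has to install an on-link host route to it before the default route can be used, a typical arrangement for Hetzner Cloud DHCP. The arithmetic, checked with the stdlib ipaddress module:

    # Show that the DHCP-assigned gateway is outside the /32 prefix on eth0,
    # which is why an explicit on-link route to the gateway is required.
    import ipaddress

    iface = ipaddress.ip_interface("135.181.151.25/32")   # values from the log above
    gateway = ipaddress.ip_address("172.31.1.1")

    print("network:", iface.network)                             # 135.181.151.25/32
    print("addresses in prefix:", iface.network.num_addresses)   # 1
    print("gateway inside prefix:", gateway in iface.network)    # False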
Nov 6 00:22:11.954997 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 00:22:11.955728 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 00:22:11.959513 extend-filesystems[1564]: Found /dev/sda9 Nov 6 00:22:11.960339 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 00:22:11.971891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 00:22:11.973550 oslogin_cache_refresh[1566]: Refreshing passwd entry cache Nov 6 00:22:11.975414 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache Nov 6 00:22:11.974417 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 00:22:11.974593 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 00:22:11.985879 extend-filesystems[1564]: Checking size of /dev/sda9 Nov 6 00:22:12.792063 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting Nov 6 00:22:12.792063 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:22:12.792063 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache Nov 6 00:22:12.790302 systemd-resolved[1410]: Clock change detected. Flushing caches. Nov 6 00:22:11.991994 oslogin_cache_refresh[1566]: Failure getting users, quitting Nov 6 00:22:12.790406 systemd-timesyncd[1493]: Contacted time server 78.46.87.46:123 (0.flatcar.pool.ntp.org). Nov 6 00:22:11.992015 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 00:22:12.790507 systemd-timesyncd[1493]: Initial clock synchronization to Thu 2025-11-06 00:22:12.790091 UTC. Nov 6 00:22:11.992068 oslogin_cache_refresh[1566]: Refreshing group entry cache Nov 6 00:22:12.800117 jq[1579]: true Nov 6 00:22:12.802070 update_engine[1578]: I20251106 00:22:12.800603 1578 main.cc:92] Flatcar Update Engine starting Nov 6 00:22:12.804286 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting Nov 6 00:22:12.804286 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:22:12.802516 oslogin_cache_refresh[1566]: Failure getting groups, quitting Nov 6 00:22:12.802528 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 00:22:12.805835 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 00:22:12.806019 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 00:22:12.808070 coreos-metadata[1560]: Nov 06 00:22:12.807 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 6 00:22:12.811511 coreos-metadata[1560]: Nov 06 00:22:12.811 INFO Fetch successful Nov 6 00:22:12.811511 coreos-metadata[1560]: Nov 06 00:22:12.811 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 6 00:22:12.812849 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 00:22:12.813152 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
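coreos-metadata fetches instance data from http://169.254.169.254/hetzner/v1/metadata, the link-local endpoint shown above. A minimal sketch of the same request with urllib; it only succeeds from inside a Hetzner instance, and the timeout and error handling are illustrative rather than what the agent itself does:

    # Fetch Hetzner instance metadata from the link-local endpoint used by
    # coreos-metadata in the log above. Only reachable from inside the instance.
    import urllib.request

    METADATA_URL = "http://169.254.169.254/hetzner/v1/metadata"

    def fetch_metadata(url: str = METADATA_URL, timeout: float = 5.0) -> str:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8")

    if __name__ == "__main__":
        try:
            print(fetch_metadata())
        except OSError as exc:
            print(f"metadata endpoint not reachable: {exc}")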
Nov 6 00:22:12.815663 coreos-metadata[1560]: Nov 06 00:22:12.815 INFO Fetch successful Nov 6 00:22:12.820844 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Nov 6 00:22:12.820951 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Nov 6 00:22:12.827146 kernel: Console: switching to colour dummy device 80x25 Nov 6 00:22:12.829721 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Nov 6 00:22:12.829778 kernel: [drm] features: -context_init Nov 6 00:22:12.829789 extend-filesystems[1564]: Resized partition /dev/sda9 Nov 6 00:22:12.829906 extend-filesystems[1610]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 00:22:12.833070 kernel: [drm] number of scanouts: 1 Nov 6 00:22:12.833124 kernel: [drm] number of cap sets: 0 Nov 6 00:22:12.837054 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Nov 6 00:22:12.840059 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Nov 6 00:22:12.846308 kernel: Console: switching to colour frame buffer device 160x50 Nov 6 00:22:12.849064 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 9393147 blocks Nov 6 00:22:12.851874 tar[1584]: linux-amd64/LICENSE Nov 6 00:22:12.851874 tar[1584]: linux-amd64/helm Nov 6 00:22:12.864468 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Nov 6 00:22:12.851574 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 00:22:12.864754 update_engine[1578]: I20251106 00:22:12.861203 1578 update_check_scheduler.cc:74] Next update check in 9m28s Nov 6 00:22:12.854343 dbus-daemon[1561]: [system] SELinux support is enabled Nov 6 00:22:12.852302 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 00:22:12.865104 jq[1604]: true Nov 6 00:22:12.865314 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 00:22:12.874391 (ntainerd)[1607]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 00:22:12.885637 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 00:22:12.885782 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 00:22:12.887108 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 00:22:12.887266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 00:22:12.891493 systemd[1]: Started update-engine.service - Update Engine. Nov 6 00:22:12.895445 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 00:22:12.899603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:13.008990 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 00:22:13.009441 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 00:22:13.030593 bash[1639]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:22:13.032921 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 00:22:13.035192 systemd[1]: Starting sshkeys.service... Nov 6 00:22:13.062738 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
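resize2fs grows /dev/sda9 from 1,617,920 to 9,393,147 blocks; assuming the 4 KiB block size reported when the resize completes below, that is roughly 6.2 GiB expanding to about 35.8 GiB, i.e. the root filesystem growing to fill the provisioned disk. The arithmetic:

    # Rough size of the root filesystem before and after the online resize
    # logged above (block counts from the EXT4-fs messages, 4 KiB blocks).
    BLOCK_SIZE = 4096
    GIB = 1024 ** 3

    before_blocks = 1_617_920
    after_blocks = 9_393_147

    print(f"before: {before_blocks * BLOCK_SIZE / GIB:.1f} GiB")  # ~6.2 GiB
    print(f"after:  {after_blocks * BLOCK_SIZE / GIB:.1f} GiB")   # ~35.8 GiB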
Nov 6 00:22:13.063519 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:13.076446 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:22:13.080337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:13.091732 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 6 00:22:13.095106 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 00:22:13.121687 kernel: EXT4-fs (sda9): resized filesystem to 9393147 Nov 6 00:22:13.150055 extend-filesystems[1610]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 6 00:22:13.150055 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 6 00:22:13.150055 extend-filesystems[1610]: The filesystem on /dev/sda9 is now 9393147 (4k) blocks long. Nov 6 00:22:13.154119 extend-filesystems[1564]: Resized filesystem in /dev/sda9 Nov 6 00:22:13.150349 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 00:22:13.150532 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 00:22:13.203245 coreos-metadata[1654]: Nov 06 00:22:13.203 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 6 00:22:13.206128 coreos-metadata[1654]: Nov 06 00:22:13.206 INFO Fetch successful Nov 6 00:22:13.213121 unknown[1654]: wrote ssh authorized keys file for user: core Nov 6 00:22:13.214276 systemd-logind[1577]: New seat seat0. Nov 6 00:22:13.217493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 00:22:13.218480 systemd-logind[1577]: Watching system buttons on /dev/input/event3 (Power Button) Nov 6 00:22:13.218495 systemd-logind[1577]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 00:22:13.219019 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 00:22:13.220602 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 00:22:13.220886 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 00:22:13.233066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 00:22:13.268420 update-ssh-keys[1672]: Updated "/home/core/.ssh/authorized_keys" Nov 6 00:22:13.268924 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 00:22:13.287837 systemd[1]: Finished sshkeys.service. Nov 6 00:22:13.337183 containerd[1607]: time="2025-11-06T00:22:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 00:22:13.340426 containerd[1607]: time="2025-11-06T00:22:13.339210714Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 6 00:22:13.341441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
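containerd warns that it is 'Ignoring unknown key in TOML' for key=subreaper at row 8 of /usr/share/containerd/config.toml, most likely a leftover from an older config schema that the strict loader no longer recognizes. A sketch of the same class of check with the stdlib TOML parser; the expected key set here is illustrative, not containerd's real option list:

    # Flag top-level keys in a containerd-style TOML file that are not in an
    # expected schema, the same class of problem behind the "Ignoring unknown
    # key in TOML" warning above. The schema set below is an assumption.
    import tomllib  # Python 3.11+

    EXPECTED_TOP_LEVEL = {"version", "root", "state", "plugins", "grpc"}

    def unknown_keys(path: str) -> set[str]:
        with open(path, "rb") as f:
            config = tomllib.load(f)
        return set(config) - EXPECTED_TOP_LEVEL

    if __name__ == "__main__":
        for key in sorted(unknown_keys("/usr/share/containerd/config.toml")):
            print(f"unknown top-level key: {key}")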
Nov 6 00:22:13.351197 locksmithd[1619]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 00:22:13.368262 containerd[1607]: time="2025-11-06T00:22:13.368201961Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.54µs" Nov 6 00:22:13.368262 containerd[1607]: time="2025-11-06T00:22:13.368251864Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 00:22:13.368262 containerd[1607]: time="2025-11-06T00:22:13.368270669Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 00:22:13.368448 containerd[1607]: time="2025-11-06T00:22:13.368426231Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 00:22:13.368448 containerd[1607]: time="2025-11-06T00:22:13.368447270Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 00:22:13.368502 containerd[1607]: time="2025-11-06T00:22:13.368469582Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:22:13.368553 containerd[1607]: time="2025-11-06T00:22:13.368518323Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 00:22:13.368553 containerd[1607]: time="2025-11-06T00:22:13.368542509Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:22:13.368769 containerd[1607]: time="2025-11-06T00:22:13.368745440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 00:22:13.368769 containerd[1607]: time="2025-11-06T00:22:13.368764365Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:22:13.368817 containerd[1607]: time="2025-11-06T00:22:13.368773693Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 00:22:13.368817 containerd[1607]: time="2025-11-06T00:22:13.368781027Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 00:22:13.368858 containerd[1607]: time="2025-11-06T00:22:13.368833675Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 00:22:13.369002 containerd[1607]: time="2025-11-06T00:22:13.368980481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:22:13.369052 containerd[1607]: time="2025-11-06T00:22:13.369009154Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 00:22:13.369052 containerd[1607]: time="2025-11-06T00:22:13.369017771Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 00:22:13.374082 containerd[1607]: time="2025-11-06T00:22:13.374058475Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 00:22:13.374359 containerd[1607]: time="2025-11-06T00:22:13.374266104Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 00:22:13.374359 containerd[1607]: time="2025-11-06T00:22:13.374317801Z" level=info msg="metadata content store policy set" policy=shared Nov 6 00:22:13.378234 containerd[1607]: time="2025-11-06T00:22:13.378190235Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378249125Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378262981Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378273591Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378283930Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378292226Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378305290Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378315679Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378324416Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378336539Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378344233Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 00:22:13.378406 containerd[1607]: time="2025-11-06T00:22:13.378354943Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378438049Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378454250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378465791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378478756Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378488694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378498202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images 
type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378507559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378516216Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378525413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378534400Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 00:22:13.378589 containerd[1607]: time="2025-11-06T00:22:13.378542976Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 00:22:13.378784 containerd[1607]: time="2025-11-06T00:22:13.378593180Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 00:22:13.378784 containerd[1607]: time="2025-11-06T00:22:13.378607657Z" level=info msg="Start snapshots syncer" Nov 6 00:22:13.378784 containerd[1607]: time="2025-11-06T00:22:13.378628847Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 00:22:13.379408 containerd[1607]: time="2025-11-06T00:22:13.378818583Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 00:22:13.379408 containerd[1607]: time="2025-11-06T00:22:13.378859439Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.378902961Z" level=info msg="loading plugin" 
id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.378966851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.378983202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.378991968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379001345Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379011284Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379019520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379048574Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379066327Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379075244Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379083620Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379099469Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379109228Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 00:22:13.379742 containerd[1607]: time="2025-11-06T00:22:13.379116472Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379124447Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379130759Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379139735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379155655Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379168269Z" level=info msg="runtime interface created" Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379172386Z" level=info msg="created NRI interface" Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379178959Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379187525Z" level=info msg="Connect containerd service" Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379210869Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 00:22:13.379972 containerd[1607]: time="2025-11-06T00:22:13.379747204Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 00:22:13.417370 sshd_keygen[1612]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 00:22:13.452654 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 00:22:13.458276 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 00:22:13.475302 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 00:22:13.475487 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 00:22:13.479491 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 00:22:13.502261 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 00:22:13.511140 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 00:22:13.517205 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 00:22:13.521758 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 00:22:13.523781 containerd[1607]: time="2025-11-06T00:22:13.523751389Z" level=info msg="Start subscribing containerd event" Nov 6 00:22:13.523936 containerd[1607]: time="2025-11-06T00:22:13.523841608Z" level=info msg="Start recovering state" Nov 6 00:22:13.524137 containerd[1607]: time="2025-11-06T00:22:13.524125500Z" level=info msg="Start event monitor" Nov 6 00:22:13.524380 containerd[1607]: time="2025-11-06T00:22:13.524307001Z" level=info msg="Start cni network conf syncer for default" Nov 6 00:22:13.524380 containerd[1607]: time="2025-11-06T00:22:13.524319364Z" level=info msg="Start streaming server" Nov 6 00:22:13.524380 containerd[1607]: time="2025-11-06T00:22:13.524331397Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 00:22:13.524380 containerd[1607]: time="2025-11-06T00:22:13.524338339Z" level=info msg="runtime interface starting up..." Nov 6 00:22:13.524380 containerd[1607]: time="2025-11-06T00:22:13.524343730Z" level=info msg="starting plugins..." Nov 6 00:22:13.524380 containerd[1607]: time="2025-11-06T00:22:13.524357947Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 00:22:13.524760 containerd[1607]: time="2025-11-06T00:22:13.524714625Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 00:22:13.525123 containerd[1607]: time="2025-11-06T00:22:13.524971627Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 00:22:13.526073 containerd[1607]: time="2025-11-06T00:22:13.525331041Z" level=info msg="containerd successfully booted in 0.188955s" Nov 6 00:22:13.525434 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 00:22:13.582793 tar[1584]: linux-amd64/README.md Nov 6 00:22:13.595801 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
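The 'failed to load cni during init' error above is expected at this stage: containerd's CRI plugin looks for network configuration in /etc/cni/net.d, and nothing has installed one yet (a Kubernetes network add-on normally does that later). A minimal sketch of the same directory check, using the confDir shown in the CRI config dump:

    # Reproduce containerd's CNI pre-check: look for network configuration
    # files in the CNI confDir reported in the log (/etc/cni/net.d).
    from pathlib import Path

    CNI_CONF_DIR = Path("/etc/cni/net.d")

    def cni_configs() -> list[Path]:
        if not CNI_CONF_DIR.is_dir():
            return []
        return sorted(p for p in CNI_CONF_DIR.iterdir()
                      if p.suffix in {".conf", ".conflist", ".json"})

    if __name__ == "__main__":
        configs = cni_configs()
        if configs:
            for p in configs:
                print("found CNI config:", p)
        else:
            print("no network config found in", CNI_CONF_DIR)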
Nov 6 00:22:13.939325 systemd-networkd[1473]: eth0: Gained IPv6LL Nov 6 00:22:13.942806 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 00:22:13.944698 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 00:22:13.950993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:13.955392 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 00:22:13.994665 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 00:22:14.580397 systemd-networkd[1473]: eth1: Gained IPv6LL Nov 6 00:22:15.304919 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:15.316193 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 00:22:15.319294 systemd[1]: Startup finished in 3.832s (kernel) + 5.708s (initrd) + 5.146s (userspace) = 14.687s. Nov 6 00:22:15.321722 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:16.117409 kubelet[1732]: E1106 00:22:16.117307 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:16.121529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:16.121711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:16.122112 systemd[1]: kubelet.service: Consumed 1.495s CPU time, 266.5M memory peak. Nov 6 00:22:19.281014 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 00:22:19.283606 systemd[1]: Started sshd@0-135.181.151.25:22-139.178.68.195:36938.service - OpenSSH per-connection server daemon (139.178.68.195:36938). Nov 6 00:22:20.334101 sshd[1744]: Accepted publickey for core from 139.178.68.195 port 36938 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:22:20.336188 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:20.349368 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 00:22:20.351762 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 00:22:20.366121 systemd-logind[1577]: New session 1 of user core. Nov 6 00:22:20.382027 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 00:22:20.387520 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 00:22:20.403836 (systemd)[1749]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 00:22:20.408833 systemd-logind[1577]: New session c1 of user core. Nov 6 00:22:20.618642 systemd[1749]: Queued start job for default target default.target. Nov 6 00:22:20.625006 systemd[1749]: Created slice app.slice - User Application Slice. Nov 6 00:22:20.625057 systemd[1749]: Reached target paths.target - Paths. Nov 6 00:22:20.625094 systemd[1749]: Reached target timers.target - Timers. Nov 6 00:22:20.626165 systemd[1749]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 00:22:20.657844 systemd[1749]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 00:22:20.657985 systemd[1749]: Reached target sockets.target - Sockets. 
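The kubelet failure above ('open /var/lib/kubelet/config.yaml: no such file or directory') is the normal state of a node that has not been bootstrapped yet: on a kubeadm-managed node that file is written by kubeadm init or kubeadm join, and until then systemd keeps restarting the unit, which is what the rising restart counter later in the log reflects. A small sketch of the same precondition check:

    # Check the precondition the kubelet error above is complaining about:
    # the kubelet config file that kubeadm normally writes during init/join.
    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

    if KUBELET_CONFIG.is_file():
        print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
    else:
        # Matches the failure mode in the log: the node has not been joined yet.
        print(f"{KUBELET_CONFIG} missing - run kubeadm init/join to generate it")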
Nov 6 00:22:20.658183 systemd[1749]: Reached target basic.target - Basic System. Nov 6 00:22:20.658264 systemd[1749]: Reached target default.target - Main User Target. Nov 6 00:22:20.658295 systemd[1749]: Startup finished in 238ms. Nov 6 00:22:20.658385 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 00:22:20.674243 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 00:22:21.425367 systemd[1]: Started sshd@1-135.181.151.25:22-139.178.68.195:36946.service - OpenSSH per-connection server daemon (139.178.68.195:36946). Nov 6 00:22:22.560102 sshd[1760]: Accepted publickey for core from 139.178.68.195 port 36946 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:22:22.562668 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:22.572157 systemd-logind[1577]: New session 2 of user core. Nov 6 00:22:22.575308 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 00:22:23.321461 sshd[1763]: Connection closed by 139.178.68.195 port 36946 Nov 6 00:22:23.322367 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:23.327842 systemd[1]: sshd@1-135.181.151.25:22-139.178.68.195:36946.service: Deactivated successfully. Nov 6 00:22:23.330714 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 00:22:23.331972 systemd-logind[1577]: Session 2 logged out. Waiting for processes to exit. Nov 6 00:22:23.333970 systemd-logind[1577]: Removed session 2. Nov 6 00:22:23.526946 systemd[1]: Started sshd@2-135.181.151.25:22-139.178.68.195:34512.service - OpenSSH per-connection server daemon (139.178.68.195:34512). Nov 6 00:22:24.661710 sshd[1769]: Accepted publickey for core from 139.178.68.195 port 34512 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:22:24.663628 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:24.671107 systemd-logind[1577]: New session 3 of user core. Nov 6 00:22:24.683269 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 00:22:25.419564 sshd[1772]: Connection closed by 139.178.68.195 port 34512 Nov 6 00:22:25.420594 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:25.426416 systemd[1]: sshd@2-135.181.151.25:22-139.178.68.195:34512.service: Deactivated successfully. Nov 6 00:22:25.429821 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 00:22:25.431573 systemd-logind[1577]: Session 3 logged out. Waiting for processes to exit. Nov 6 00:22:25.434012 systemd-logind[1577]: Removed session 3. Nov 6 00:22:25.583269 systemd[1]: Started sshd@3-135.181.151.25:22-139.178.68.195:34528.service - OpenSSH per-connection server daemon (139.178.68.195:34528). Nov 6 00:22:26.282918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 00:22:26.286289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:26.479233 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 00:22:26.491362 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:26.561716 kubelet[1789]: E1106 00:22:26.561520 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:26.567465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:26.567863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:26.568630 systemd[1]: kubelet.service: Consumed 231ms CPU time, 108.3M memory peak. Nov 6 00:22:26.617827 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 34528 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:22:26.620169 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:26.627838 systemd-logind[1577]: New session 4 of user core. Nov 6 00:22:26.639337 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 00:22:27.310610 sshd[1795]: Connection closed by 139.178.68.195 port 34528 Nov 6 00:22:27.311470 sshd-session[1778]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:27.317696 systemd-logind[1577]: Session 4 logged out. Waiting for processes to exit. Nov 6 00:22:27.318604 systemd[1]: sshd@3-135.181.151.25:22-139.178.68.195:34528.service: Deactivated successfully. Nov 6 00:22:27.321698 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 00:22:27.324558 systemd-logind[1577]: Removed session 4. Nov 6 00:22:27.485887 systemd[1]: Started sshd@4-135.181.151.25:22-139.178.68.195:34540.service - OpenSSH per-connection server daemon (139.178.68.195:34540). Nov 6 00:22:28.514191 sshd[1801]: Accepted publickey for core from 139.178.68.195 port 34540 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:22:28.516304 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:28.524454 systemd-logind[1577]: New session 5 of user core. Nov 6 00:22:28.533331 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 00:22:29.061895 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 00:22:29.062553 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:29.079961 sudo[1805]: pam_unix(sudo:session): session closed for user root Nov 6 00:22:29.242543 sshd[1804]: Connection closed by 139.178.68.195 port 34540 Nov 6 00:22:29.243558 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:29.247915 systemd-logind[1577]: Session 5 logged out. Waiting for processes to exit. Nov 6 00:22:29.248555 systemd[1]: sshd@4-135.181.151.25:22-139.178.68.195:34540.service: Deactivated successfully. Nov 6 00:22:29.249973 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 00:22:29.251801 systemd-logind[1577]: Removed session 5. Nov 6 00:22:29.463271 systemd[1]: Started sshd@5-135.181.151.25:22-139.178.68.195:34552.service - OpenSSH per-connection server daemon (139.178.68.195:34552). 
Nov 6 00:22:30.607800 sshd[1811]: Accepted publickey for core from 139.178.68.195 port 34552 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:22:30.609846 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:30.617723 systemd-logind[1577]: New session 6 of user core. Nov 6 00:22:30.626276 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 00:22:31.198152 sudo[1816]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 00:22:31.198427 sudo[1816]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:31.205123 sudo[1816]: pam_unix(sudo:session): session closed for user root Nov 6 00:22:31.210956 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 00:22:31.211274 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:31.225628 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 00:22:31.277504 augenrules[1838]: No rules Nov 6 00:22:31.279185 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 00:22:31.279586 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 00:22:31.281378 sudo[1815]: pam_unix(sudo:session): session closed for user root Nov 6 00:22:31.463150 sshd[1814]: Connection closed by 139.178.68.195 port 34552 Nov 6 00:22:31.464062 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Nov 6 00:22:31.468523 systemd[1]: sshd@5-135.181.151.25:22-139.178.68.195:34552.service: Deactivated successfully. Nov 6 00:22:31.470650 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 00:22:31.471774 systemd-logind[1577]: Session 6 logged out. Waiting for processes to exit. Nov 6 00:22:31.473692 systemd-logind[1577]: Removed session 6. Nov 6 00:22:31.669740 systemd[1]: Started sshd@6-135.181.151.25:22-139.178.68.195:34556.service - OpenSSH per-connection server daemon (139.178.68.195:34556). Nov 6 00:22:32.787330 sshd[1847]: Accepted publickey for core from 139.178.68.195 port 34556 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:22:32.789076 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:22:32.794615 systemd-logind[1577]: New session 7 of user core. Nov 6 00:22:32.805375 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 00:22:33.371128 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 00:22:33.371419 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 00:22:33.904645 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 6 00:22:33.927534 (dockerd)[1869]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 00:22:34.340070 dockerd[1869]: time="2025-11-06T00:22:34.339078744Z" level=info msg="Starting up" Nov 6 00:22:34.344866 dockerd[1869]: time="2025-11-06T00:22:34.344803292Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 00:22:34.359684 dockerd[1869]: time="2025-11-06T00:22:34.359574506Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 00:22:34.428176 dockerd[1869]: time="2025-11-06T00:22:34.428123477Z" level=info msg="Loading containers: start." Nov 6 00:22:34.443095 kernel: Initializing XFRM netlink socket Nov 6 00:22:34.771233 systemd-networkd[1473]: docker0: Link UP Nov 6 00:22:34.777455 dockerd[1869]: time="2025-11-06T00:22:34.777400761Z" level=info msg="Loading containers: done." Nov 6 00:22:34.799700 dockerd[1869]: time="2025-11-06T00:22:34.799633271Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 00:22:34.799909 dockerd[1869]: time="2025-11-06T00:22:34.799744039Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 00:22:34.799909 dockerd[1869]: time="2025-11-06T00:22:34.799825301Z" level=info msg="Initializing buildkit" Nov 6 00:22:34.800596 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2528431647-merged.mount: Deactivated successfully. Nov 6 00:22:34.831755 dockerd[1869]: time="2025-11-06T00:22:34.831701237Z" level=info msg="Completed buildkit initialization" Nov 6 00:22:34.843567 dockerd[1869]: time="2025-11-06T00:22:34.843463889Z" level=info msg="Daemon has completed initialization" Nov 6 00:22:34.843567 dockerd[1869]: time="2025-11-06T00:22:34.843542106Z" level=info msg="API listen on /run/docker.sock" Nov 6 00:22:34.844155 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 00:22:36.429866 containerd[1607]: time="2025-11-06T00:22:36.429809529Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 00:22:36.783566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 00:22:36.788026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:36.969188 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:36.985358 (kubelet)[2088]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:36.994704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990046201.mount: Deactivated successfully. Nov 6 00:22:37.055416 kubelet[2088]: E1106 00:22:37.055281 2088 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:37.058251 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:37.058363 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
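dockerd reports 'API listen on /run/docker.sock' once initialization completes. The Engine API can be exercised directly over that UNIX socket; a minimal sketch sending a raw HTTP request to the /_ping endpoint (in practice the docker CLI or an SDK would be used instead):

    # Ping the Docker Engine API over the UNIX socket it reports listening on.
    import socket

    DOCKER_SOCKET = "/run/docker.sock"

    def docker_ping() -> str:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(DOCKER_SOCKET)
            sock.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    if __name__ == "__main__":
        # A healthy daemon answers with a 200 status line and body "OK".
        print(docker_ping().splitlines()[0])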
Nov 6 00:22:37.059265 systemd[1]: kubelet.service: Consumed 207ms CPU time, 108.2M memory peak. Nov 6 00:22:38.017417 containerd[1607]: time="2025-11-06T00:22:38.017356252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:38.018669 containerd[1607]: time="2025-11-06T00:22:38.018479218Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114993" Nov 6 00:22:38.019772 containerd[1607]: time="2025-11-06T00:22:38.019747166Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:38.022460 containerd[1607]: time="2025-11-06T00:22:38.022429367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:38.023179 containerd[1607]: time="2025-11-06T00:22:38.023161099Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.593302258s" Nov 6 00:22:38.023270 containerd[1607]: time="2025-11-06T00:22:38.023257680Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 00:22:38.023966 containerd[1607]: time="2025-11-06T00:22:38.023951010Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 00:22:39.259108 containerd[1607]: time="2025-11-06T00:22:39.259020787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:39.260714 containerd[1607]: time="2025-11-06T00:22:39.260416235Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020866" Nov 6 00:22:39.261880 containerd[1607]: time="2025-11-06T00:22:39.261853660Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:39.265374 containerd[1607]: time="2025-11-06T00:22:39.265335751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:39.265981 containerd[1607]: time="2025-11-06T00:22:39.265949242Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.241920014s" Nov 6 00:22:39.265981 containerd[1607]: time="2025-11-06T00:22:39.265979249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference 
\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 00:22:39.266529 containerd[1607]: time="2025-11-06T00:22:39.266483174Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 00:22:40.365553 containerd[1607]: time="2025-11-06T00:22:40.365447551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:40.368002 containerd[1607]: time="2025-11-06T00:22:40.367785196Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155590" Nov 6 00:22:40.371274 containerd[1607]: time="2025-11-06T00:22:40.371198367Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:40.377752 containerd[1607]: time="2025-11-06T00:22:40.376911092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:40.380421 containerd[1607]: time="2025-11-06T00:22:40.380335275Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.113818727s" Nov 6 00:22:40.380421 containerd[1607]: time="2025-11-06T00:22:40.380396690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 00:22:40.381062 containerd[1607]: time="2025-11-06T00:22:40.380986516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 00:22:41.435411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019308805.mount: Deactivated successfully. 
Nov 6 00:22:41.826694 containerd[1607]: time="2025-11-06T00:22:41.826640024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:41.827818 containerd[1607]: time="2025-11-06T00:22:41.827788367Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929497" Nov 6 00:22:41.829271 containerd[1607]: time="2025-11-06T00:22:41.829221054Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:41.831418 containerd[1607]: time="2025-11-06T00:22:41.831370646Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:41.832263 containerd[1607]: time="2025-11-06T00:22:41.832228305Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.451152952s" Nov 6 00:22:41.832300 containerd[1607]: time="2025-11-06T00:22:41.832264633Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 00:22:41.832990 containerd[1607]: time="2025-11-06T00:22:41.832961360Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 00:22:42.265940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95785275.mount: Deactivated successfully. 
Nov 6 00:22:43.195470 containerd[1607]: time="2025-11-06T00:22:43.195405630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:43.196632 containerd[1607]: time="2025-11-06T00:22:43.196418049Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Nov 6 00:22:43.197749 containerd[1607]: time="2025-11-06T00:22:43.197716615Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:43.200484 containerd[1607]: time="2025-11-06T00:22:43.200406059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:43.201286 containerd[1607]: time="2025-11-06T00:22:43.201251755Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.368256722s" Nov 6 00:22:43.201330 containerd[1607]: time="2025-11-06T00:22:43.201291369Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 00:22:43.201643 containerd[1607]: time="2025-11-06T00:22:43.201623383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 00:22:43.661477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3745050444.mount: Deactivated successfully. 
Nov 6 00:22:43.670734 containerd[1607]: time="2025-11-06T00:22:43.670647574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:22:43.672112 containerd[1607]: time="2025-11-06T00:22:43.671851041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Nov 6 00:22:43.673708 containerd[1607]: time="2025-11-06T00:22:43.673630889Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:22:43.679098 containerd[1607]: time="2025-11-06T00:22:43.677802544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 00:22:43.679098 containerd[1607]: time="2025-11-06T00:22:43.678850219Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 477.197953ms" Nov 6 00:22:43.679098 containerd[1607]: time="2025-11-06T00:22:43.678889603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 00:22:43.679864 containerd[1607]: time="2025-11-06T00:22:43.679796433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 00:22:44.125431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521398012.mount: Deactivated successfully. 
Nov 6 00:22:45.640014 containerd[1607]: time="2025-11-06T00:22:45.639937494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:45.641457 containerd[1607]: time="2025-11-06T00:22:45.641193660Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378491" Nov 6 00:22:45.642588 containerd[1607]: time="2025-11-06T00:22:45.642552469Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:45.645425 containerd[1607]: time="2025-11-06T00:22:45.645389960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:22:45.647300 containerd[1607]: time="2025-11-06T00:22:45.647255430Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.967415304s" Nov 6 00:22:45.647300 containerd[1607]: time="2025-11-06T00:22:45.647295875Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 00:22:47.283718 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 00:22:47.288300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:47.472152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:47.480260 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 00:22:47.516004 kubelet[2309]: E1106 00:22:47.515954 2309 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 00:22:47.518299 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 00:22:47.518416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 00:22:47.518832 systemd[1]: kubelet.service: Consumed 163ms CPU time, 111.4M memory peak. Nov 6 00:22:49.457324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:49.457715 systemd[1]: kubelet.service: Consumed 163ms CPU time, 111.4M memory peak. Nov 6 00:22:49.461461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:49.505514 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-7.scope)... Nov 6 00:22:49.505535 systemd[1]: Reloading... Nov 6 00:22:49.607266 zram_generator::config[2370]: No configuration found. Nov 6 00:22:49.806179 systemd[1]: Reloading finished in 300 ms. Nov 6 00:22:49.857867 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 00:22:49.857932 systemd[1]: kubelet.service: Failed with result 'signal'. 
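The pull sequence recorded above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) is served by containerd in the k8s.io namespace. Roughly the same pull can be driven through containerd's Go client; the sketch below uses the classic 1.x import paths for brevity, while the daemon in this log is containerd v2.0.5 (whose client moved to github.com/containerd/containerd/v2/client), so treat it as illustrative only:

```go
// Illustrative only: drives one of the pulls recorded above through the
// containerd client API. Socket path and namespace are the usual defaults on
// a Kubernetes node; this is a sketch, not the kubelet's actual pull path.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images on a Kubernetes node live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", image.Name(), image.Target().Digest)
}
```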
Nov 6 00:22:49.858142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:49.858285 systemd[1]: kubelet.service: Consumed 109ms CPU time, 98.4M memory peak. Nov 6 00:22:49.860866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:49.979231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:49.992541 (kubelet)[2421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:22:50.036948 kubelet[2421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:50.036948 kubelet[2421]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 00:22:50.036948 kubelet[2421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:50.037512 kubelet[2421]: I1106 00:22:50.036957 2421 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:22:50.548070 kubelet[2421]: I1106 00:22:50.547826 2421 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:22:50.548070 kubelet[2421]: I1106 00:22:50.547885 2421 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:22:50.548400 kubelet[2421]: I1106 00:22:50.548363 2421 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:22:50.601112 kubelet[2421]: I1106 00:22:50.600727 2421 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:22:50.604594 kubelet[2421]: E1106 00:22:50.604133 2421 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://135.181.151.25:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 00:22:50.620841 kubelet[2421]: I1106 00:22:50.620795 2421 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:22:50.631608 kubelet[2421]: I1106 00:22:50.631311 2421 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 6 00:22:50.634262 kubelet[2421]: I1106 00:22:50.634179 2421 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:22:50.638401 kubelet[2421]: I1106 00:22:50.634213 2421 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-n-bff22aa786","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:22:50.638401 kubelet[2421]: I1106 00:22:50.638377 2421 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:22:50.638401 kubelet[2421]: I1106 00:22:50.638391 2421 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:22:50.640003 kubelet[2421]: I1106 00:22:50.639954 2421 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:50.644096 kubelet[2421]: I1106 00:22:50.643680 2421 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:22:50.644096 kubelet[2421]: I1106 00:22:50.643703 2421 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:22:50.644096 kubelet[2421]: I1106 00:22:50.643731 2421 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:22:50.644096 kubelet[2421]: I1106 00:22:50.643746 2421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:22:50.663092 kubelet[2421]: E1106 00:22:50.662190 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://135.181.151.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-n-bff22aa786&limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:22:50.663092 kubelet[2421]: I1106 00:22:50.662351 2421 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:22:50.663242 kubelet[2421]: I1106 00:22:50.663151 2421 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate 
is disabled" Nov 6 00:22:50.664374 kubelet[2421]: W1106 00:22:50.664311 2421 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 00:22:50.668054 kubelet[2421]: E1106 00:22:50.668015 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.151.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:22:50.671660 kubelet[2421]: I1106 00:22:50.671623 2421 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:22:50.671750 kubelet[2421]: I1106 00:22:50.671719 2421 server.go:1289] "Started kubelet" Nov 6 00:22:50.673054 kubelet[2421]: I1106 00:22:50.672680 2421 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:22:50.673836 kubelet[2421]: I1106 00:22:50.673823 2421 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:22:50.680484 kubelet[2421]: I1106 00:22:50.679982 2421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:22:50.680932 kubelet[2421]: I1106 00:22:50.680864 2421 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:22:50.685325 kubelet[2421]: I1106 00:22:50.683374 2421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:22:50.686618 kubelet[2421]: E1106 00:22:50.680989 2421 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.151.25:6443/api/v1/namespaces/default/events\": dial tcp 135.181.151.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-0-n-bff22aa786.187543114717baac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-0-n-bff22aa786,UID:ci-4459-1-0-n-bff22aa786,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-n-bff22aa786,},FirstTimestamp:2025-11-06 00:22:50.671659692 +0000 UTC m=+0.672233672,LastTimestamp:2025-11-06 00:22:50.671659692 +0000 UTC m=+0.672233672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-n-bff22aa786,}" Nov 6 00:22:50.686870 kubelet[2421]: I1106 00:22:50.686759 2421 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:22:50.690108 kubelet[2421]: I1106 00:22:50.690083 2421 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:22:50.690903 kubelet[2421]: E1106 00:22:50.690876 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:50.697405 kubelet[2421]: I1106 00:22:50.697248 2421 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Nov 6 00:22:50.699653 kubelet[2421]: I1106 00:22:50.698339 2421 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:22:50.700096 kubelet[2421]: I1106 00:22:50.699625 2421 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:22:50.704762 kubelet[2421]: E1106 00:22:50.704727 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.151.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-n-bff22aa786?timeout=10s\": dial tcp 135.181.151.25:6443: connect: connection refused" interval="200ms" Nov 6 00:22:50.705416 kubelet[2421]: E1106 00:22:50.705378 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.151.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:22:50.706729 kubelet[2421]: I1106 00:22:50.705754 2421 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:22:50.706729 kubelet[2421]: I1106 00:22:50.705816 2421 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:22:50.707893 kubelet[2421]: I1106 00:22:50.707760 2421 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:22:50.708308 kubelet[2421]: E1106 00:22:50.708283 2421 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:22:50.725231 kubelet[2421]: I1106 00:22:50.725188 2421 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:22:50.725231 kubelet[2421]: I1106 00:22:50.725202 2421 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:22:50.725376 kubelet[2421]: I1106 00:22:50.725280 2421 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 00:22:50.725376 kubelet[2421]: I1106 00:22:50.725319 2421 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:50.726297 kubelet[2421]: I1106 00:22:50.726105 2421 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:22:50.726297 kubelet[2421]: I1106 00:22:50.726128 2421 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 00:22:50.726297 kubelet[2421]: I1106 00:22:50.726134 2421 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:22:50.726297 kubelet[2421]: E1106 00:22:50.726199 2421 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:22:50.727437 kubelet[2421]: E1106 00:22:50.727403 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://135.181.151.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:22:50.727809 kubelet[2421]: I1106 00:22:50.727779 2421 policy_none.go:49] "None policy: Start" Nov 6 00:22:50.727809 kubelet[2421]: I1106 00:22:50.727798 2421 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:22:50.727809 kubelet[2421]: I1106 00:22:50.727807 2421 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:22:50.733747 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 00:22:50.741356 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 6 00:22:50.768874 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 00:22:50.770114 kubelet[2421]: E1106 00:22:50.770068 2421 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:22:50.771283 kubelet[2421]: I1106 00:22:50.771231 2421 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:22:50.772495 kubelet[2421]: I1106 00:22:50.772429 2421 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:22:50.772712 kubelet[2421]: I1106 00:22:50.772650 2421 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:22:50.773688 kubelet[2421]: E1106 00:22:50.773588 2421 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 00:22:50.773688 kubelet[2421]: E1106 00:22:50.773620 2421 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:50.845606 systemd[1]: Created slice kubepods-burstable-pod13a28541a34eaa4017b52a45c336ce12.slice - libcontainer container kubepods-burstable-pod13a28541a34eaa4017b52a45c336ce12.slice. Nov 6 00:22:50.864510 kubelet[2421]: E1106 00:22:50.864414 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:50.870474 systemd[1]: Created slice kubepods-burstable-pod669c41a4d6cf9d26c3e91ec9753f34aa.slice - libcontainer container kubepods-burstable-pod669c41a4d6cf9d26c3e91ec9753f34aa.slice. 
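Every "connect: connection refused" against https://135.181.151.25:6443 in this stretch is expected: the kubelet has come up before the kube-apiserver static pod it is about to create, so lease renewal, node registration and the informer watches all fail until that pod is serving. A throwaway probe of the same condition (illustrative; the endpoint is taken from the log, and TLS verification is skipped only because this is a reachability check, not real API traffic):

```go
// Illustrative only: a standalone probe for the condition behind the repeated
// "connection refused" errors above. InsecureSkipVerify is deliberate for a
// liveness-style check and must not be used for real API requests.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://135.181.151.25:6443/healthz")
	if err != nil {
		// Until the kube-apiserver static pod is running, this is the same
		// dial error the kubelet keeps logging and retrying.
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver healthz:", resp.Status)
}
```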
Nov 6 00:22:50.874705 kubelet[2421]: I1106 00:22:50.874610 2421 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:50.875170 kubelet[2421]: E1106 00:22:50.875095 2421 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.151.25:6443/api/v1/nodes\": dial tcp 135.181.151.25:6443: connect: connection refused" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:50.879025 kubelet[2421]: E1106 00:22:50.878956 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:50.884295 systemd[1]: Created slice kubepods-burstable-pod30b4c9314c0c99c0a194810b3f72a5e8.slice - libcontainer container kubepods-burstable-pod30b4c9314c0c99c0a194810b3f72a5e8.slice. Nov 6 00:22:50.888199 kubelet[2421]: E1106 00:22:50.888150 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:50.906178 kubelet[2421]: E1106 00:22:50.906098 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.151.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-n-bff22aa786?timeout=10s\": dial tcp 135.181.151.25:6443: connect: connection refused" interval="400ms" Nov 6 00:22:51.001013 kubelet[2421]: I1106 00:22:51.000847 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001013 kubelet[2421]: I1106 00:22:51.000921 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13a28541a34eaa4017b52a45c336ce12-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" (UID: \"13a28541a34eaa4017b52a45c336ce12\") " pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001013 kubelet[2421]: I1106 00:22:51.000954 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13a28541a34eaa4017b52a45c336ce12-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" (UID: \"13a28541a34eaa4017b52a45c336ce12\") " pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001347 kubelet[2421]: I1106 00:22:51.001095 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13a28541a34eaa4017b52a45c336ce12-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" (UID: \"13a28541a34eaa4017b52a45c336ce12\") " pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001347 kubelet[2421]: I1106 00:22:51.001250 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " 
pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001347 kubelet[2421]: I1106 00:22:51.001304 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001473 kubelet[2421]: I1106 00:22:51.001359 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001473 kubelet[2421]: I1106 00:22:51.001397 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.001473 kubelet[2421]: I1106 00:22:51.001452 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30b4c9314c0c99c0a194810b3f72a5e8-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-n-bff22aa786\" (UID: \"30b4c9314c0c99c0a194810b3f72a5e8\") " pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.078618 kubelet[2421]: I1106 00:22:51.078553 2421 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.079174 kubelet[2421]: E1106 00:22:51.078958 2421 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.151.25:6443/api/v1/nodes\": dial tcp 135.181.151.25:6443: connect: connection refused" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.167110 containerd[1607]: time="2025-11-06T00:22:51.166888496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-n-bff22aa786,Uid:13a28541a34eaa4017b52a45c336ce12,Namespace:kube-system,Attempt:0,}" Nov 6 00:22:51.180812 containerd[1607]: time="2025-11-06T00:22:51.180565617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-n-bff22aa786,Uid:669c41a4d6cf9d26c3e91ec9753f34aa,Namespace:kube-system,Attempt:0,}" Nov 6 00:22:51.199471 containerd[1607]: time="2025-11-06T00:22:51.199357639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-n-bff22aa786,Uid:30b4c9314c0c99c0a194810b3f72a5e8,Namespace:kube-system,Attempt:0,}" Nov 6 00:22:51.309295 kubelet[2421]: E1106 00:22:51.309210 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://135.181.151.25:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-1-0-n-bff22aa786?timeout=10s\": dial tcp 135.181.151.25:6443: connect: connection refused" interval="800ms" Nov 6 00:22:51.330251 containerd[1607]: time="2025-11-06T00:22:51.330192262Z" level=info msg="connecting to shim 8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8" 
address="unix:///run/containerd/s/c8265540af6fb7b1006f1bf13343d6459369071c9bf03a7d18f1fafc8ece9153" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:51.330788 containerd[1607]: time="2025-11-06T00:22:51.330523196Z" level=info msg="connecting to shim 151e51f7e57208c6ea8eb5f845bca07451f8e1512284a6616237649751ab821b" address="unix:///run/containerd/s/3e8f0f8650540ea878c3dd470f472c6c435787b0141ad81ff69916fc8b1dde43" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:51.337219 containerd[1607]: time="2025-11-06T00:22:51.337190633Z" level=info msg="connecting to shim 2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed" address="unix:///run/containerd/s/e692baeb3420f56337baacb24cbb562038c92f296f94ddbe00e51e214034ddc7" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:22:51.456232 systemd[1]: Started cri-containerd-151e51f7e57208c6ea8eb5f845bca07451f8e1512284a6616237649751ab821b.scope - libcontainer container 151e51f7e57208c6ea8eb5f845bca07451f8e1512284a6616237649751ab821b. Nov 6 00:22:51.457523 systemd[1]: Started cri-containerd-2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed.scope - libcontainer container 2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed. Nov 6 00:22:51.458663 systemd[1]: Started cri-containerd-8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8.scope - libcontainer container 8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8. Nov 6 00:22:51.484055 kubelet[2421]: I1106 00:22:51.483966 2421 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.486788 kubelet[2421]: E1106 00:22:51.486731 2421 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://135.181.151.25:6443/api/v1/nodes\": dial tcp 135.181.151.25:6443: connect: connection refused" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.537504 containerd[1607]: time="2025-11-06T00:22:51.537164189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-1-0-n-bff22aa786,Uid:30b4c9314c0c99c0a194810b3f72a5e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed\"" Nov 6 00:22:51.545981 containerd[1607]: time="2025-11-06T00:22:51.545950741Z" level=info msg="CreateContainer within sandbox \"2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 00:22:51.550922 kubelet[2421]: E1106 00:22:51.550895 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://135.181.151.25:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 00:22:51.551394 containerd[1607]: time="2025-11-06T00:22:51.551372351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-1-0-n-bff22aa786,Uid:669c41a4d6cf9d26c3e91ec9753f34aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8\"" Nov 6 00:22:51.553286 containerd[1607]: time="2025-11-06T00:22:51.553243848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-1-0-n-bff22aa786,Uid:13a28541a34eaa4017b52a45c336ce12,Namespace:kube-system,Attempt:0,} returns sandbox id \"151e51f7e57208c6ea8eb5f845bca07451f8e1512284a6616237649751ab821b\"" 
Nov 6 00:22:51.555680 containerd[1607]: time="2025-11-06T00:22:51.555663389Z" level=info msg="CreateContainer within sandbox \"8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 00:22:51.557461 containerd[1607]: time="2025-11-06T00:22:51.557421112Z" level=info msg="CreateContainer within sandbox \"151e51f7e57208c6ea8eb5f845bca07451f8e1512284a6616237649751ab821b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 00:22:51.565254 containerd[1607]: time="2025-11-06T00:22:51.565225963Z" level=info msg="Container a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:51.567543 containerd[1607]: time="2025-11-06T00:22:51.567452320Z" level=info msg="Container e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:51.569998 kubelet[2421]: E1106 00:22:51.569958 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://135.181.151.25:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 00:22:51.570623 containerd[1607]: time="2025-11-06T00:22:51.570610122Z" level=info msg="Container 175beab4e68ae22c42909b384cab74e89c834c28928b90a7ca33115a846a2d31: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:22:51.579732 containerd[1607]: time="2025-11-06T00:22:51.579645233Z" level=info msg="CreateContainer within sandbox \"8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830\"" Nov 6 00:22:51.580529 containerd[1607]: time="2025-11-06T00:22:51.580514481Z" level=info msg="StartContainer for \"e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830\"" Nov 6 00:22:51.581683 containerd[1607]: time="2025-11-06T00:22:51.581666473Z" level=info msg="connecting to shim e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830" address="unix:///run/containerd/s/c8265540af6fb7b1006f1bf13343d6459369071c9bf03a7d18f1fafc8ece9153" protocol=ttrpc version=3 Nov 6 00:22:51.586857 containerd[1607]: time="2025-11-06T00:22:51.586804478Z" level=info msg="CreateContainer within sandbox \"151e51f7e57208c6ea8eb5f845bca07451f8e1512284a6616237649751ab821b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"175beab4e68ae22c42909b384cab74e89c834c28928b90a7ca33115a846a2d31\"" Nov 6 00:22:51.587677 containerd[1607]: time="2025-11-06T00:22:51.587659600Z" level=info msg="StartContainer for \"175beab4e68ae22c42909b384cab74e89c834c28928b90a7ca33115a846a2d31\"" Nov 6 00:22:51.589695 containerd[1607]: time="2025-11-06T00:22:51.589513063Z" level=info msg="connecting to shim 175beab4e68ae22c42909b384cab74e89c834c28928b90a7ca33115a846a2d31" address="unix:///run/containerd/s/3e8f0f8650540ea878c3dd470f472c6c435787b0141ad81ff69916fc8b1dde43" protocol=ttrpc version=3 Nov 6 00:22:51.589742 containerd[1607]: time="2025-11-06T00:22:51.589632428Z" level=info msg="CreateContainer within sandbox \"2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e\"" Nov 6 00:22:51.590388 containerd[1607]: time="2025-11-06T00:22:51.590362815Z" level=info msg="StartContainer for \"a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e\"" Nov 6 00:22:51.591902 containerd[1607]: time="2025-11-06T00:22:51.591876247Z" level=info msg="connecting to shim a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e" address="unix:///run/containerd/s/e692baeb3420f56337baacb24cbb562038c92f296f94ddbe00e51e214034ddc7" protocol=ttrpc version=3 Nov 6 00:22:51.599158 systemd[1]: Started cri-containerd-e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830.scope - libcontainer container e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830. Nov 6 00:22:51.616316 systemd[1]: Started cri-containerd-a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e.scope - libcontainer container a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e. Nov 6 00:22:51.620279 systemd[1]: Started cri-containerd-175beab4e68ae22c42909b384cab74e89c834c28928b90a7ca33115a846a2d31.scope - libcontainer container 175beab4e68ae22c42909b384cab74e89c834c28928b90a7ca33115a846a2d31. Nov 6 00:22:51.675705 containerd[1607]: time="2025-11-06T00:22:51.675557350Z" level=info msg="StartContainer for \"a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e\" returns successfully" Nov 6 00:22:51.680081 containerd[1607]: time="2025-11-06T00:22:51.679930213Z" level=info msg="StartContainer for \"175beab4e68ae22c42909b384cab74e89c834c28928b90a7ca33115a846a2d31\" returns successfully" Nov 6 00:22:51.697836 containerd[1607]: time="2025-11-06T00:22:51.697792071Z" level=info msg="StartContainer for \"e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830\" returns successfully" Nov 6 00:22:51.738196 kubelet[2421]: E1106 00:22:51.737530 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.741655 kubelet[2421]: E1106 00:22:51.741445 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.744130 kubelet[2421]: E1106 00:22:51.744120 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:51.745438 kubelet[2421]: E1106 00:22:51.745353 2421 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://135.181.151.25:6443/api/v1/namespaces/default/events\": dial tcp 135.181.151.25:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-1-0-n-bff22aa786.187543114717baac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-1-0-n-bff22aa786,UID:ci-4459-1-0-n-bff22aa786,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-n-bff22aa786,},FirstTimestamp:2025-11-06 00:22:50.671659692 +0000 UTC m=+0.672233672,LastTimestamp:2025-11-06 00:22:50.671659692 +0000 UTC m=+0.672233672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-n-bff22aa786,}" Nov 6 00:22:51.881720 
kubelet[2421]: E1106 00:22:51.881542 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://135.181.151.25:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 00:22:51.927051 kubelet[2421]: E1106 00:22:51.926756 2421 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://135.181.151.25:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-1-0-n-bff22aa786&limit=500&resourceVersion=0\": dial tcp 135.181.151.25:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 00:22:52.289491 kubelet[2421]: I1106 00:22:52.289447 2421 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:52.746851 kubelet[2421]: E1106 00:22:52.746645 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:52.747602 kubelet[2421]: E1106 00:22:52.747530 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:54.158214 kubelet[2421]: E1106 00:22:54.158175 2421 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:54.284121 kubelet[2421]: I1106 00:22:54.283983 2421 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:54.284121 kubelet[2421]: E1106 00:22:54.284013 2421 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-1-0-n-bff22aa786\": node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.299940 kubelet[2421]: E1106 00:22:54.299910 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.400134 kubelet[2421]: E1106 00:22:54.400056 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.501000 kubelet[2421]: E1106 00:22:54.500851 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.601738 kubelet[2421]: E1106 00:22:54.601607 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.630868 kubelet[2421]: E1106 00:22:54.630595 2421 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-1-0-n-bff22aa786\" not found" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:54.701809 kubelet[2421]: E1106 00:22:54.701736 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.801893 kubelet[2421]: E1106 00:22:54.801830 2421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.902944 kubelet[2421]: E1106 00:22:54.902869 2421 kubelet_node_status.go:466] "Error getting the current node from lister" 
err="node \"ci-4459-1-0-n-bff22aa786\" not found" Nov 6 00:22:54.999179 kubelet[2421]: I1106 00:22:54.999119 2421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:55.010263 kubelet[2421]: E1106 00:22:55.010195 2421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:55.010263 kubelet[2421]: I1106 00:22:55.010250 2421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:55.013270 kubelet[2421]: E1106 00:22:55.013204 2421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-n-bff22aa786\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:55.013270 kubelet[2421]: I1106 00:22:55.013244 2421 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:55.015863 kubelet[2421]: E1106 00:22:55.015815 2421 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:55.666426 kubelet[2421]: I1106 00:22:55.666357 2421 apiserver.go:52] "Watching apiserver" Nov 6 00:22:55.701041 kubelet[2421]: I1106 00:22:55.700988 2421 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:22:56.478278 systemd[1]: Reload requested from client PID 2703 ('systemctl') (unit session-7.scope)... Nov 6 00:22:56.478320 systemd[1]: Reloading... Nov 6 00:22:56.608073 zram_generator::config[2744]: No configuration found. Nov 6 00:22:56.820258 systemd[1]: Reloading finished in 341 ms. Nov 6 00:22:56.854751 kubelet[2421]: I1106 00:22:56.854673 2421 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:22:56.856234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:56.871840 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 00:22:56.872053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:56.872104 systemd[1]: kubelet.service: Consumed 1.029s CPU time, 129.5M memory peak. Nov 6 00:22:56.874391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 00:22:57.006916 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 00:22:57.020078 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 00:22:57.082908 kubelet[2798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:57.082908 kubelet[2798]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 6 00:22:57.082908 kubelet[2798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 00:22:57.083524 kubelet[2798]: I1106 00:22:57.082924 2798 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 00:22:57.091802 kubelet[2798]: I1106 00:22:57.091754 2798 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 00:22:57.091802 kubelet[2798]: I1106 00:22:57.091773 2798 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 00:22:57.091997 kubelet[2798]: I1106 00:22:57.091930 2798 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 00:22:57.092868 kubelet[2798]: I1106 00:22:57.092851 2798 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 00:22:57.094576 kubelet[2798]: I1106 00:22:57.094538 2798 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 00:22:57.097728 kubelet[2798]: I1106 00:22:57.097696 2798 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 00:22:57.100006 kubelet[2798]: I1106 00:22:57.099970 2798 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 00:22:57.100214 kubelet[2798]: I1106 00:22:57.100153 2798 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 00:22:57.100327 kubelet[2798]: I1106 00:22:57.100177 2798 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-1-0-n-bff22aa786","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 00:22:57.100327 kubelet[2798]: I1106 00:22:57.100291 2798 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 00:22:57.100327 
kubelet[2798]: I1106 00:22:57.100311 2798 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 00:22:57.100575 kubelet[2798]: I1106 00:22:57.100353 2798 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:57.100575 kubelet[2798]: I1106 00:22:57.100490 2798 kubelet.go:480] "Attempting to sync node with API server" Nov 6 00:22:57.100575 kubelet[2798]: I1106 00:22:57.100501 2798 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 00:22:57.100575 kubelet[2798]: I1106 00:22:57.100518 2798 kubelet.go:386] "Adding apiserver pod source" Nov 6 00:22:57.110201 kubelet[2798]: I1106 00:22:57.108428 2798 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 00:22:57.115387 kubelet[2798]: I1106 00:22:57.115363 2798 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 6 00:22:57.115720 kubelet[2798]: I1106 00:22:57.115702 2798 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 00:22:57.119484 kubelet[2798]: I1106 00:22:57.119092 2798 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 00:22:57.119484 kubelet[2798]: I1106 00:22:57.119133 2798 server.go:1289] "Started kubelet" Nov 6 00:22:57.119484 kubelet[2798]: I1106 00:22:57.119192 2798 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 00:22:57.119994 kubelet[2798]: I1106 00:22:57.119805 2798 server.go:317] "Adding debug handlers to kubelet server" Nov 6 00:22:57.120475 kubelet[2798]: I1106 00:22:57.120406 2798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 00:22:57.120669 kubelet[2798]: I1106 00:22:57.120598 2798 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 00:22:57.123624 kubelet[2798]: I1106 00:22:57.122562 2798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 00:22:57.131929 kubelet[2798]: I1106 00:22:57.131138 2798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 00:22:57.132093 kubelet[2798]: I1106 00:22:57.132085 2798 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 00:22:57.132497 kubelet[2798]: I1106 00:22:57.132486 2798 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 00:22:57.132618 kubelet[2798]: I1106 00:22:57.132611 2798 reconciler.go:26] "Reconciler: start to sync state" Nov 6 00:22:57.136366 kubelet[2798]: E1106 00:22:57.136061 2798 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 00:22:57.139459 kubelet[2798]: I1106 00:22:57.139442 2798 factory.go:223] Registration of the containerd container factory successfully Nov 6 00:22:57.139824 kubelet[2798]: I1106 00:22:57.139815 2798 factory.go:223] Registration of the systemd container factory successfully Nov 6 00:22:57.140741 kubelet[2798]: I1106 00:22:57.140565 2798 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 00:22:57.150139 kubelet[2798]: I1106 00:22:57.150116 2798 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 6 00:22:57.153174 kubelet[2798]: I1106 00:22:57.153161 2798 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 00:22:57.153260 kubelet[2798]: I1106 00:22:57.153254 2798 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 00:22:57.153343 kubelet[2798]: I1106 00:22:57.153336 2798 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 00:22:57.153386 kubelet[2798]: I1106 00:22:57.153381 2798 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 00:22:57.153695 kubelet[2798]: E1106 00:22:57.153681 2798 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 00:22:57.183454 kubelet[2798]: I1106 00:22:57.183419 2798 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 00:22:57.183454 kubelet[2798]: I1106 00:22:57.183442 2798 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 00:22:57.183454 kubelet[2798]: I1106 00:22:57.183464 2798 state_mem.go:36] "Initialized new in-memory state store" Nov 6 00:22:57.183614 kubelet[2798]: I1106 00:22:57.183606 2798 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 00:22:57.183634 kubelet[2798]: I1106 00:22:57.183618 2798 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 00:22:57.183664 kubelet[2798]: I1106 00:22:57.183638 2798 policy_none.go:49] "None policy: Start" Nov 6 00:22:57.183664 kubelet[2798]: I1106 00:22:57.183649 2798 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 00:22:57.183664 kubelet[2798]: I1106 00:22:57.183661 2798 state_mem.go:35] "Initializing new in-memory state store" Nov 6 00:22:57.183786 kubelet[2798]: I1106 00:22:57.183767 2798 state_mem.go:75] "Updated machine memory state" Nov 6 00:22:57.187931 kubelet[2798]: E1106 00:22:57.187906 2798 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 00:22:57.188358 kubelet[2798]: I1106 00:22:57.188090 2798 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 00:22:57.188358 kubelet[2798]: I1106 00:22:57.188104 2798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 00:22:57.188358 kubelet[2798]: I1106 00:22:57.188313 2798 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 00:22:57.190415 kubelet[2798]: E1106 00:22:57.189892 2798 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 00:22:57.255401 kubelet[2798]: I1106 00:22:57.255364 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.255723 kubelet[2798]: I1106 00:22:57.255390 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.255857 kubelet[2798]: I1106 00:22:57.255552 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.294669 kubelet[2798]: I1106 00:22:57.294622 2798 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.305690 kubelet[2798]: I1106 00:22:57.305605 2798 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.305843 kubelet[2798]: I1106 00:22:57.305706 2798 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334275 kubelet[2798]: I1106 00:22:57.334131 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30b4c9314c0c99c0a194810b3f72a5e8-kubeconfig\") pod \"kube-scheduler-ci-4459-1-0-n-bff22aa786\" (UID: \"30b4c9314c0c99c0a194810b3f72a5e8\") " pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334275 kubelet[2798]: I1106 00:22:57.334178 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/13a28541a34eaa4017b52a45c336ce12-ca-certs\") pod \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" (UID: \"13a28541a34eaa4017b52a45c336ce12\") " pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334275 kubelet[2798]: I1106 00:22:57.334204 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334275 kubelet[2798]: I1106 00:22:57.334231 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-k8s-certs\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334275 kubelet[2798]: I1106 00:22:57.334257 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-kubeconfig\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334604 kubelet[2798]: I1106 00:22:57.334282 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: 
\"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334604 kubelet[2798]: I1106 00:22:57.334326 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/13a28541a34eaa4017b52a45c336ce12-k8s-certs\") pod \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" (UID: \"13a28541a34eaa4017b52a45c336ce12\") " pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334604 kubelet[2798]: I1106 00:22:57.334355 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/13a28541a34eaa4017b52a45c336ce12-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" (UID: \"13a28541a34eaa4017b52a45c336ce12\") " pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.334604 kubelet[2798]: I1106 00:22:57.334380 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/669c41a4d6cf9d26c3e91ec9753f34aa-ca-certs\") pod \"kube-controller-manager-ci-4459-1-0-n-bff22aa786\" (UID: \"669c41a4d6cf9d26c3e91ec9753f34aa\") " pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:57.633154 update_engine[1578]: I20251106 00:22:57.632498 1578 update_attempter.cc:509] Updating boot flags... Nov 6 00:22:58.111659 kubelet[2798]: I1106 00:22:58.111604 2798 apiserver.go:52] "Watching apiserver" Nov 6 00:22:58.133161 kubelet[2798]: I1106 00:22:58.133105 2798 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 00:22:58.181427 kubelet[2798]: I1106 00:22:58.181334 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:58.181683 kubelet[2798]: I1106 00:22:58.181508 2798 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:58.223769 kubelet[2798]: E1106 00:22:58.223732 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-1-0-n-bff22aa786\" already exists" pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:58.231974 kubelet[2798]: E1106 00:22:58.231936 2798 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-1-0-n-bff22aa786\" already exists" pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" Nov 6 00:22:58.293701 kubelet[2798]: I1106 00:22:58.293642 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-1-0-n-bff22aa786" podStartSLOduration=1.29362361 podStartE2EDuration="1.29362361s" podCreationTimestamp="2025-11-06 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:58.260470631 +0000 UTC m=+1.232304483" watchObservedRunningTime="2025-11-06 00:22:58.29362361 +0000 UTC m=+1.265457452" Nov 6 00:22:58.303804 kubelet[2798]: I1106 00:22:58.303750 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-1-0-n-bff22aa786" podStartSLOduration=1.303579933 podStartE2EDuration="1.303579933s" podCreationTimestamp="2025-11-06 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:58.295082665 +0000 UTC m=+1.266916507" watchObservedRunningTime="2025-11-06 00:22:58.303579933 +0000 UTC m=+1.275413775" Nov 6 00:22:58.322607 kubelet[2798]: I1106 00:22:58.322526 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-1-0-n-bff22aa786" podStartSLOduration=1.322501955 podStartE2EDuration="1.322501955s" podCreationTimestamp="2025-11-06 00:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:22:58.30440703 +0000 UTC m=+1.276240871" watchObservedRunningTime="2025-11-06 00:22:58.322501955 +0000 UTC m=+1.294335797" Nov 6 00:23:02.117881 kubelet[2798]: I1106 00:23:02.117808 2798 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 00:23:02.118747 containerd[1607]: time="2025-11-06T00:23:02.118686683Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 00:23:02.119712 kubelet[2798]: I1106 00:23:02.119380 2798 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 00:23:02.187997 systemd[1]: Created slice kubepods-besteffort-pod1e8e40dd_b580_49ea_8a92_eadbe90406e8.slice - libcontainer container kubepods-besteffort-pod1e8e40dd_b580_49ea_8a92_eadbe90406e8.slice. Nov 6 00:23:02.268587 kubelet[2798]: I1106 00:23:02.268494 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmfsj\" (UniqueName: \"kubernetes.io/projected/1e8e40dd-b580-49ea-8a92-eadbe90406e8-kube-api-access-lmfsj\") pod \"kube-proxy-cqv4v\" (UID: \"1e8e40dd-b580-49ea-8a92-eadbe90406e8\") " pod="kube-system/kube-proxy-cqv4v" Nov 6 00:23:02.268587 kubelet[2798]: I1106 00:23:02.268549 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1e8e40dd-b580-49ea-8a92-eadbe90406e8-kube-proxy\") pod \"kube-proxy-cqv4v\" (UID: \"1e8e40dd-b580-49ea-8a92-eadbe90406e8\") " pod="kube-system/kube-proxy-cqv4v" Nov 6 00:23:02.268587 kubelet[2798]: I1106 00:23:02.268575 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e8e40dd-b580-49ea-8a92-eadbe90406e8-xtables-lock\") pod \"kube-proxy-cqv4v\" (UID: \"1e8e40dd-b580-49ea-8a92-eadbe90406e8\") " pod="kube-system/kube-proxy-cqv4v" Nov 6 00:23:02.268587 kubelet[2798]: I1106 00:23:02.268592 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e8e40dd-b580-49ea-8a92-eadbe90406e8-lib-modules\") pod \"kube-proxy-cqv4v\" (UID: \"1e8e40dd-b580-49ea-8a92-eadbe90406e8\") " pod="kube-system/kube-proxy-cqv4v" Nov 6 00:23:02.381612 kubelet[2798]: E1106 00:23:02.381353 2798 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 00:23:02.381612 kubelet[2798]: E1106 00:23:02.381401 2798 projected.go:194] Error preparing data for projected volume kube-api-access-lmfsj for pod kube-system/kube-proxy-cqv4v: configmap "kube-root-ca.crt" not found Nov 6 00:23:02.381612 kubelet[2798]: E1106 00:23:02.381506 2798 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/1e8e40dd-b580-49ea-8a92-eadbe90406e8-kube-api-access-lmfsj podName:1e8e40dd-b580-49ea-8a92-eadbe90406e8 nodeName:}" failed. No retries permitted until 2025-11-06 00:23:02.881474975 +0000 UTC m=+5.853308847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lmfsj" (UniqueName: "kubernetes.io/projected/1e8e40dd-b580-49ea-8a92-eadbe90406e8-kube-api-access-lmfsj") pod "kube-proxy-cqv4v" (UID: "1e8e40dd-b580-49ea-8a92-eadbe90406e8") : configmap "kube-root-ca.crt" not found Nov 6 00:23:02.975096 kubelet[2798]: E1106 00:23:02.975014 2798 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 00:23:02.975352 kubelet[2798]: E1106 00:23:02.975115 2798 projected.go:194] Error preparing data for projected volume kube-api-access-lmfsj for pod kube-system/kube-proxy-cqv4v: configmap "kube-root-ca.crt" not found Nov 6 00:23:02.975352 kubelet[2798]: E1106 00:23:02.975221 2798 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e8e40dd-b580-49ea-8a92-eadbe90406e8-kube-api-access-lmfsj podName:1e8e40dd-b580-49ea-8a92-eadbe90406e8 nodeName:}" failed. No retries permitted until 2025-11-06 00:23:03.975197284 +0000 UTC m=+6.947031165 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-lmfsj" (UniqueName: "kubernetes.io/projected/1e8e40dd-b580-49ea-8a92-eadbe90406e8-kube-api-access-lmfsj") pod "kube-proxy-cqv4v" (UID: "1e8e40dd-b580-49ea-8a92-eadbe90406e8") : configmap "kube-root-ca.crt" not found Nov 6 00:23:03.377989 kubelet[2798]: I1106 00:23:03.377822 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c4dad300-dc01-400d-8c3a-e2e7c883cd64-var-lib-calico\") pod \"tigera-operator-7dcd859c48-sq6jd\" (UID: \"c4dad300-dc01-400d-8c3a-e2e7c883cd64\") " pod="tigera-operator/tigera-operator-7dcd859c48-sq6jd" Nov 6 00:23:03.380185 kubelet[2798]: I1106 00:23:03.380154 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s5xl\" (UniqueName: \"kubernetes.io/projected/c4dad300-dc01-400d-8c3a-e2e7c883cd64-kube-api-access-6s5xl\") pod \"tigera-operator-7dcd859c48-sq6jd\" (UID: \"c4dad300-dc01-400d-8c3a-e2e7c883cd64\") " pod="tigera-operator/tigera-operator-7dcd859c48-sq6jd" Nov 6 00:23:03.384342 systemd[1]: Created slice kubepods-besteffort-podc4dad300_dc01_400d_8c3a_e2e7c883cd64.slice - libcontainer container kubepods-besteffort-podc4dad300_dc01_400d_8c3a_e2e7c883cd64.slice. Nov 6 00:23:03.690744 containerd[1607]: time="2025-11-06T00:23:03.690241813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sq6jd,Uid:c4dad300-dc01-400d-8c3a-e2e7c883cd64,Namespace:tigera-operator,Attempt:0,}" Nov 6 00:23:03.722932 containerd[1607]: time="2025-11-06T00:23:03.722847047Z" level=info msg="connecting to shim 7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287" address="unix:///run/containerd/s/7ebe8692df3c36d9b51257d942f3e5a7abf821abfe7e1fa7116cff4392e861ef" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:03.763267 systemd[1]: Started cri-containerd-7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287.scope - libcontainer container 7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287. 
Nov 6 00:23:03.840472 containerd[1607]: time="2025-11-06T00:23:03.840387882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-sq6jd,Uid:c4dad300-dc01-400d-8c3a-e2e7c883cd64,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287\"" Nov 6 00:23:03.843353 containerd[1607]: time="2025-11-06T00:23:03.843294396Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 00:23:04.004016 containerd[1607]: time="2025-11-06T00:23:04.003799789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqv4v,Uid:1e8e40dd-b580-49ea-8a92-eadbe90406e8,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:04.046128 containerd[1607]: time="2025-11-06T00:23:04.045835080Z" level=info msg="connecting to shim 6c98b354c330aaf517c187bc1bb7006bf07a35aacdcd069afcdc2e52ee1f291d" address="unix:///run/containerd/s/60236af2968ad5399601010891c2d96bb4aa69ce6433e919ea9702f732ab5f8f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:04.089259 systemd[1]: Started cri-containerd-6c98b354c330aaf517c187bc1bb7006bf07a35aacdcd069afcdc2e52ee1f291d.scope - libcontainer container 6c98b354c330aaf517c187bc1bb7006bf07a35aacdcd069afcdc2e52ee1f291d. Nov 6 00:23:04.134224 containerd[1607]: time="2025-11-06T00:23:04.134149709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cqv4v,Uid:1e8e40dd-b580-49ea-8a92-eadbe90406e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c98b354c330aaf517c187bc1bb7006bf07a35aacdcd069afcdc2e52ee1f291d\"" Nov 6 00:23:04.140497 containerd[1607]: time="2025-11-06T00:23:04.140416378Z" level=info msg="CreateContainer within sandbox \"6c98b354c330aaf517c187bc1bb7006bf07a35aacdcd069afcdc2e52ee1f291d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 00:23:04.153472 containerd[1607]: time="2025-11-06T00:23:04.153419459Z" level=info msg="Container d10c9a74f9e3327347f0337a6833e343a1077f927c7fa55cdf4cb8134dca9c12: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:04.166497 containerd[1607]: time="2025-11-06T00:23:04.166403364Z" level=info msg="CreateContainer within sandbox \"6c98b354c330aaf517c187bc1bb7006bf07a35aacdcd069afcdc2e52ee1f291d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d10c9a74f9e3327347f0337a6833e343a1077f927c7fa55cdf4cb8134dca9c12\"" Nov 6 00:23:04.167629 containerd[1607]: time="2025-11-06T00:23:04.167557954Z" level=info msg="StartContainer for \"d10c9a74f9e3327347f0337a6833e343a1077f927c7fa55cdf4cb8134dca9c12\"" Nov 6 00:23:04.171122 containerd[1607]: time="2025-11-06T00:23:04.170996017Z" level=info msg="connecting to shim d10c9a74f9e3327347f0337a6833e343a1077f927c7fa55cdf4cb8134dca9c12" address="unix:///run/containerd/s/60236af2968ad5399601010891c2d96bb4aa69ce6433e919ea9702f732ab5f8f" protocol=ttrpc version=3 Nov 6 00:23:04.201391 systemd[1]: Started cri-containerd-d10c9a74f9e3327347f0337a6833e343a1077f927c7fa55cdf4cb8134dca9c12.scope - libcontainer container d10c9a74f9e3327347f0337a6833e343a1077f927c7fa55cdf4cb8134dca9c12. 
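The containerd entries above use a logfmt-style time=... level=... msg=... layout, so a small parser is enough to pull out the sandbox IDs and shim socket addresses quoted in this section. The regex below is a rough sketch tuned to the lines shown here (it assumes double-quoted values with no escaped quotes inside them), not a general logfmt implementation; the sample line is copied from the kube-proxy sandbox entry above.

```python
#!/usr/bin/env python3
"""Rough parser for the containerd log lines quoted above
(time="..." level=info msg="..." key=value ...).
Assumes double-quoted values never contain escaped quotes."""
import re

# key=value pairs, where value is either "quoted text" or a bare token
PAIR = re.compile(r'(\w+)=("(?:[^"]*)"|\S+)')


def parse_containerd_line(line: str) -> dict:
    return {key: value.strip('"') for key, value in PAIR.findall(line)}


sample = ('time="2025-11-06T00:23:04.045835080Z" level=info '
          'msg="connecting to shim 6c98b354c330aaf517c187bc1bb7006bf07a35aacdcd069afcdc2e52ee1f291d" '
          'address="unix:///run/containerd/s/60236af2968ad5399601010891c2d96bb4aa69ce6433e919ea9702f732ab5f8f" '
          'namespace=k8s.io protocol=ttrpc version=3')

if __name__ == "__main__":
    parsed = parse_containerd_line(sample)
    print(parsed["level"], parsed["address"])
    print(parsed["msg"])
```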
Nov 6 00:23:04.267473 containerd[1607]: time="2025-11-06T00:23:04.266920970Z" level=info msg="StartContainer for \"d10c9a74f9e3327347f0337a6833e343a1077f927c7fa55cdf4cb8134dca9c12\" returns successfully" Nov 6 00:23:05.749566 kubelet[2798]: I1106 00:23:05.748996 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cqv4v" podStartSLOduration=3.748971129 podStartE2EDuration="3.748971129s" podCreationTimestamp="2025-11-06 00:23:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:05.228282037 +0000 UTC m=+8.200115918" watchObservedRunningTime="2025-11-06 00:23:05.748971129 +0000 UTC m=+8.720805011" Nov 6 00:23:05.963619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768144374.mount: Deactivated successfully. Nov 6 00:23:07.081014 containerd[1607]: time="2025-11-06T00:23:07.080947564Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:07.082070 containerd[1607]: time="2025-11-06T00:23:07.081984382Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 6 00:23:07.083078 containerd[1607]: time="2025-11-06T00:23:07.083040196Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:07.085108 containerd[1607]: time="2025-11-06T00:23:07.085071663Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:07.085850 containerd[1607]: time="2025-11-06T00:23:07.085548118Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.242090504s" Nov 6 00:23:07.085850 containerd[1607]: time="2025-11-06T00:23:07.085582953Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 00:23:07.090715 containerd[1607]: time="2025-11-06T00:23:07.090690761Z" level=info msg="CreateContainer within sandbox \"7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 00:23:07.098119 containerd[1607]: time="2025-11-06T00:23:07.097660266Z" level=info msg="Container 0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:07.108462 containerd[1607]: time="2025-11-06T00:23:07.108434180Z" level=info msg="CreateContainer within sandbox \"7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111\"" Nov 6 00:23:07.108859 containerd[1607]: time="2025-11-06T00:23:07.108838389Z" level=info msg="StartContainer for \"0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111\"" Nov 6 00:23:07.109405 containerd[1607]: time="2025-11-06T00:23:07.109379175Z" 
level=info msg="connecting to shim 0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111" address="unix:///run/containerd/s/7ebe8692df3c36d9b51257d942f3e5a7abf821abfe7e1fa7116cff4392e861ef" protocol=ttrpc version=3 Nov 6 00:23:07.132149 systemd[1]: Started cri-containerd-0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111.scope - libcontainer container 0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111. Nov 6 00:23:07.167189 containerd[1607]: time="2025-11-06T00:23:07.167149651Z" level=info msg="StartContainer for \"0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111\" returns successfully" Nov 6 00:23:10.614894 kubelet[2798]: I1106 00:23:10.614194 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-sq6jd" podStartSLOduration=4.370640192 podStartE2EDuration="7.614173048s" podCreationTimestamp="2025-11-06 00:23:03 +0000 UTC" firstStartedPulling="2025-11-06 00:23:03.842573341 +0000 UTC m=+6.814407213" lastFinishedPulling="2025-11-06 00:23:07.086106226 +0000 UTC m=+10.057940069" observedRunningTime="2025-11-06 00:23:07.233355156 +0000 UTC m=+10.205189018" watchObservedRunningTime="2025-11-06 00:23:10.614173048 +0000 UTC m=+13.586006930" Nov 6 00:23:13.307049 sudo[1851]: pam_unix(sudo:session): session closed for user root Nov 6 00:23:13.487925 sshd[1850]: Connection closed by 139.178.68.195 port 34556 Nov 6 00:23:13.490068 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Nov 6 00:23:13.492556 systemd-logind[1577]: Session 7 logged out. Waiting for processes to exit. Nov 6 00:23:13.493509 systemd[1]: sshd@6-135.181.151.25:22-139.178.68.195:34556.service: Deactivated successfully. Nov 6 00:23:13.495713 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 00:23:13.495936 systemd[1]: session-7.scope: Consumed 5.948s CPU time, 155.6M memory peak. Nov 6 00:23:13.498662 systemd-logind[1577]: Removed session 7. Nov 6 00:23:19.117739 systemd[1]: Created slice kubepods-besteffort-pod53630c4c_3033_4428_9646_f9fe77e55c2b.slice - libcontainer container kubepods-besteffort-pod53630c4c_3033_4428_9646_f9fe77e55c2b.slice. 
Nov 6 00:23:19.192001 kubelet[2798]: I1106 00:23:19.191954 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/53630c4c-3033-4428-9646-f9fe77e55c2b-typha-certs\") pod \"calico-typha-5994b5c59-6s7r6\" (UID: \"53630c4c-3033-4428-9646-f9fe77e55c2b\") " pod="calico-system/calico-typha-5994b5c59-6s7r6" Nov 6 00:23:19.192489 kubelet[2798]: I1106 00:23:19.192052 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7grf\" (UniqueName: \"kubernetes.io/projected/53630c4c-3033-4428-9646-f9fe77e55c2b-kube-api-access-q7grf\") pod \"calico-typha-5994b5c59-6s7r6\" (UID: \"53630c4c-3033-4428-9646-f9fe77e55c2b\") " pod="calico-system/calico-typha-5994b5c59-6s7r6" Nov 6 00:23:19.192489 kubelet[2798]: I1106 00:23:19.192072 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53630c4c-3033-4428-9646-f9fe77e55c2b-tigera-ca-bundle\") pod \"calico-typha-5994b5c59-6s7r6\" (UID: \"53630c4c-3033-4428-9646-f9fe77e55c2b\") " pod="calico-system/calico-typha-5994b5c59-6s7r6" Nov 6 00:23:19.364462 systemd[1]: Created slice kubepods-besteffort-podd6010ff1_00aa_4b99_90d2_17d88a8f628b.slice - libcontainer container kubepods-besteffort-podd6010ff1_00aa_4b99_90d2_17d88a8f628b.slice. Nov 6 00:23:19.394192 kubelet[2798]: I1106 00:23:19.393471 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d6010ff1-00aa-4b99-90d2-17d88a8f628b-node-certs\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394192 kubelet[2798]: I1106 00:23:19.393502 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6010ff1-00aa-4b99-90d2-17d88a8f628b-tigera-ca-bundle\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394192 kubelet[2798]: I1106 00:23:19.393514 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-var-lib-calico\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394192 kubelet[2798]: I1106 00:23:19.393528 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-flexvol-driver-host\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394192 kubelet[2798]: I1106 00:23:19.393544 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-lib-modules\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394449 kubelet[2798]: I1106 00:23:19.393556 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbpcw\" 
(UniqueName: \"kubernetes.io/projected/d6010ff1-00aa-4b99-90d2-17d88a8f628b-kube-api-access-kbpcw\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394449 kubelet[2798]: I1106 00:23:19.393571 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-cni-bin-dir\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394449 kubelet[2798]: I1106 00:23:19.393584 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-cni-log-dir\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394449 kubelet[2798]: I1106 00:23:19.393595 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-policysync\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394449 kubelet[2798]: I1106 00:23:19.393609 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-xtables-lock\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394596 kubelet[2798]: I1106 00:23:19.393620 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-cni-net-dir\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.394596 kubelet[2798]: I1106 00:23:19.393630 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d6010ff1-00aa-4b99-90d2-17d88a8f628b-var-run-calico\") pod \"calico-node-hjvm9\" (UID: \"d6010ff1-00aa-4b99-90d2-17d88a8f628b\") " pod="calico-system/calico-node-hjvm9" Nov 6 00:23:19.421384 containerd[1607]: time="2025-11-06T00:23:19.421319318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5994b5c59-6s7r6,Uid:53630c4c-3033-4428-9646-f9fe77e55c2b,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:19.478922 containerd[1607]: time="2025-11-06T00:23:19.478868776Z" level=info msg="connecting to shim 39e0ea51276a900a85347abf2ec01ab1e271c6a5bed5e67f6820b8d81898f311" address="unix:///run/containerd/s/d40583c4262d3296d42c8cd6cee839c8404fe27c111005896fced3a473c92428" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:19.499272 systemd[1]: Started cri-containerd-39e0ea51276a900a85347abf2ec01ab1e271c6a5bed5e67f6820b8d81898f311.scope - libcontainer container 39e0ea51276a900a85347abf2ec01ab1e271c6a5bed5e67f6820b8d81898f311. 
Nov 6 00:23:19.510947 kubelet[2798]: E1106 00:23:19.510875 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.511390 kubelet[2798]: W1106 00:23:19.511158 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.511390 kubelet[2798]: E1106 00:23:19.511191 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.543933 kubelet[2798]: E1106 00:23:19.543806 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:19.562780 containerd[1607]: time="2025-11-06T00:23:19.562526574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5994b5c59-6s7r6,Uid:53630c4c-3033-4428-9646-f9fe77e55c2b,Namespace:calico-system,Attempt:0,} returns sandbox id \"39e0ea51276a900a85347abf2ec01ab1e271c6a5bed5e67f6820b8d81898f311\"" Nov 6 00:23:19.565103 containerd[1607]: time="2025-11-06T00:23:19.564923782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 00:23:19.589492 kubelet[2798]: E1106 00:23:19.589476 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.589597 kubelet[2798]: W1106 00:23:19.589586 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.589686 kubelet[2798]: E1106 00:23:19.589642 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.589919 kubelet[2798]: E1106 00:23:19.589870 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.589919 kubelet[2798]: W1106 00:23:19.589878 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.589919 kubelet[2798]: E1106 00:23:19.589886 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.590177 kubelet[2798]: E1106 00:23:19.590168 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.590274 kubelet[2798]: W1106 00:23:19.590228 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.590274 kubelet[2798]: E1106 00:23:19.590237 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.590604 kubelet[2798]: E1106 00:23:19.590515 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.590604 kubelet[2798]: W1106 00:23:19.590522 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.590604 kubelet[2798]: E1106 00:23:19.590530 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.590781 kubelet[2798]: E1106 00:23:19.590733 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.590781 kubelet[2798]: W1106 00:23:19.590741 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.590781 kubelet[2798]: E1106 00:23:19.590747 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.591326 kubelet[2798]: E1106 00:23:19.590976 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.591326 kubelet[2798]: W1106 00:23:19.590984 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.591326 kubelet[2798]: E1106 00:23:19.590994 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.591326 kubelet[2798]: E1106 00:23:19.591201 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.591326 kubelet[2798]: W1106 00:23:19.591208 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.591326 kubelet[2798]: E1106 00:23:19.591226 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.591644 kubelet[2798]: E1106 00:23:19.591595 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.591644 kubelet[2798]: W1106 00:23:19.591603 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.591644 kubelet[2798]: E1106 00:23:19.591610 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.591918 kubelet[2798]: E1106 00:23:19.591868 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.591918 kubelet[2798]: W1106 00:23:19.591876 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.591918 kubelet[2798]: E1106 00:23:19.591884 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.592190 kubelet[2798]: E1106 00:23:19.592142 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.592190 kubelet[2798]: W1106 00:23:19.592150 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.592190 kubelet[2798]: E1106 00:23:19.592157 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.592452 kubelet[2798]: E1106 00:23:19.592406 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.592452 kubelet[2798]: W1106 00:23:19.592415 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.592452 kubelet[2798]: E1106 00:23:19.592422 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.592733 kubelet[2798]: E1106 00:23:19.592725 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.592843 kubelet[2798]: W1106 00:23:19.592791 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.592843 kubelet[2798]: E1106 00:23:19.592803 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.593189 kubelet[2798]: E1106 00:23:19.593111 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.593189 kubelet[2798]: W1106 00:23:19.593119 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.593189 kubelet[2798]: E1106 00:23:19.593134 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.593370 kubelet[2798]: E1106 00:23:19.593318 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.593370 kubelet[2798]: W1106 00:23:19.593325 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.593370 kubelet[2798]: E1106 00:23:19.593332 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.593630 kubelet[2798]: E1106 00:23:19.593590 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.593630 kubelet[2798]: W1106 00:23:19.593598 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.593630 kubelet[2798]: E1106 00:23:19.593605 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.593902 kubelet[2798]: E1106 00:23:19.593851 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.593902 kubelet[2798]: W1106 00:23:19.593859 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.593902 kubelet[2798]: E1106 00:23:19.593866 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.594210 kubelet[2798]: E1106 00:23:19.594171 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.594210 kubelet[2798]: W1106 00:23:19.594179 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.594210 kubelet[2798]: E1106 00:23:19.594187 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.594471 kubelet[2798]: E1106 00:23:19.594433 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.594471 kubelet[2798]: W1106 00:23:19.594440 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.594471 kubelet[2798]: E1106 00:23:19.594448 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.594746 kubelet[2798]: E1106 00:23:19.594696 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.594746 kubelet[2798]: W1106 00:23:19.594703 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.594746 kubelet[2798]: E1106 00:23:19.594711 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.594980 kubelet[2798]: E1106 00:23:19.594952 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.594980 kubelet[2798]: W1106 00:23:19.594960 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.594980 kubelet[2798]: E1106 00:23:19.594969 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.595304 kubelet[2798]: E1106 00:23:19.595296 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.595348 kubelet[2798]: W1106 00:23:19.595341 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.595400 kubelet[2798]: E1106 00:23:19.595393 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.595450 kubelet[2798]: I1106 00:23:19.595442 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f296fc03-b516-4c28-a887-9cf8255c6651-registration-dir\") pod \"csi-node-driver-dzjlj\" (UID: \"f296fc03-b516-4c28-a887-9cf8255c6651\") " pod="calico-system/csi-node-driver-dzjlj" Nov 6 00:23:19.595681 kubelet[2798]: E1106 00:23:19.595673 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.595729 kubelet[2798]: W1106 00:23:19.595722 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.595771 kubelet[2798]: E1106 00:23:19.595764 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.595953 kubelet[2798]: E1106 00:23:19.595930 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.595953 kubelet[2798]: W1106 00:23:19.595938 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.595953 kubelet[2798]: E1106 00:23:19.595945 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.596229 kubelet[2798]: E1106 00:23:19.596205 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.596229 kubelet[2798]: W1106 00:23:19.596213 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.596229 kubelet[2798]: E1106 00:23:19.596220 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.596388 kubelet[2798]: I1106 00:23:19.596329 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqvl\" (UniqueName: \"kubernetes.io/projected/f296fc03-b516-4c28-a887-9cf8255c6651-kube-api-access-qdqvl\") pod \"csi-node-driver-dzjlj\" (UID: \"f296fc03-b516-4c28-a887-9cf8255c6651\") " pod="calico-system/csi-node-driver-dzjlj" Nov 6 00:23:19.596576 kubelet[2798]: E1106 00:23:19.596541 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.596576 kubelet[2798]: W1106 00:23:19.596549 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.596576 kubelet[2798]: E1106 00:23:19.596557 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.596704 kubelet[2798]: I1106 00:23:19.596669 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f296fc03-b516-4c28-a887-9cf8255c6651-socket-dir\") pod \"csi-node-driver-dzjlj\" (UID: \"f296fc03-b516-4c28-a887-9cf8255c6651\") " pod="calico-system/csi-node-driver-dzjlj" Nov 6 00:23:19.596917 kubelet[2798]: E1106 00:23:19.596892 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.596917 kubelet[2798]: W1106 00:23:19.596900 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.596917 kubelet[2798]: E1106 00:23:19.596909 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.597080 kubelet[2798]: I1106 00:23:19.597008 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f296fc03-b516-4c28-a887-9cf8255c6651-kubelet-dir\") pod \"csi-node-driver-dzjlj\" (UID: \"f296fc03-b516-4c28-a887-9cf8255c6651\") " pod="calico-system/csi-node-driver-dzjlj" Nov 6 00:23:19.597261 kubelet[2798]: E1106 00:23:19.597234 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.597261 kubelet[2798]: W1106 00:23:19.597243 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.597261 kubelet[2798]: E1106 00:23:19.597252 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.597474 kubelet[2798]: I1106 00:23:19.597396 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f296fc03-b516-4c28-a887-9cf8255c6651-varrun\") pod \"csi-node-driver-dzjlj\" (UID: \"f296fc03-b516-4c28-a887-9cf8255c6651\") " pod="calico-system/csi-node-driver-dzjlj" Nov 6 00:23:19.597582 kubelet[2798]: E1106 00:23:19.597556 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.597582 kubelet[2798]: W1106 00:23:19.597565 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.597582 kubelet[2798]: E1106 00:23:19.597573 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.597832 kubelet[2798]: E1106 00:23:19.597809 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.597832 kubelet[2798]: W1106 00:23:19.597817 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.597832 kubelet[2798]: E1106 00:23:19.597824 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.598116 kubelet[2798]: E1106 00:23:19.598092 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.598116 kubelet[2798]: W1106 00:23:19.598100 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.598116 kubelet[2798]: E1106 00:23:19.598108 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.598407 kubelet[2798]: E1106 00:23:19.598384 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.598407 kubelet[2798]: W1106 00:23:19.598392 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.598407 kubelet[2798]: E1106 00:23:19.598399 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.598635 kubelet[2798]: E1106 00:23:19.598612 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.598635 kubelet[2798]: W1106 00:23:19.598619 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.598635 kubelet[2798]: E1106 00:23:19.598626 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.598874 kubelet[2798]: E1106 00:23:19.598850 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.598874 kubelet[2798]: W1106 00:23:19.598858 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.598874 kubelet[2798]: E1106 00:23:19.598865 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.599146 kubelet[2798]: E1106 00:23:19.599123 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.599146 kubelet[2798]: W1106 00:23:19.599131 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.599146 kubelet[2798]: E1106 00:23:19.599138 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.599401 kubelet[2798]: E1106 00:23:19.599375 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.599401 kubelet[2798]: W1106 00:23:19.599382 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.599401 kubelet[2798]: E1106 00:23:19.599389 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.670203 containerd[1607]: time="2025-11-06T00:23:19.669888285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjvm9,Uid:d6010ff1-00aa-4b99-90d2-17d88a8f628b,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:19.694063 containerd[1607]: time="2025-11-06T00:23:19.693537178Z" level=info msg="connecting to shim 72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850" address="unix:///run/containerd/s/fb52c49adf209eacfe4a539ddaad476355b9d4ab9cf11c83f43a9638faf987ed" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:19.698143 kubelet[2798]: E1106 00:23:19.698111 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.698143 kubelet[2798]: W1106 00:23:19.698137 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.698258 kubelet[2798]: E1106 00:23:19.698155 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.698437 kubelet[2798]: E1106 00:23:19.698413 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.698437 kubelet[2798]: W1106 00:23:19.698431 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.698437 kubelet[2798]: E1106 00:23:19.698439 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.698647 kubelet[2798]: E1106 00:23:19.698625 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.698647 kubelet[2798]: W1106 00:23:19.698639 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.698647 kubelet[2798]: E1106 00:23:19.698646 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.698827 kubelet[2798]: E1106 00:23:19.698806 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.698827 kubelet[2798]: W1106 00:23:19.698819 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.698827 kubelet[2798]: E1106 00:23:19.698826 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.699984 kubelet[2798]: E1106 00:23:19.699954 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.699984 kubelet[2798]: W1106 00:23:19.699971 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.699984 kubelet[2798]: E1106 00:23:19.699979 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.700184 kubelet[2798]: E1106 00:23:19.700158 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.700184 kubelet[2798]: W1106 00:23:19.700165 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.700184 kubelet[2798]: E1106 00:23:19.700173 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.700536 kubelet[2798]: E1106 00:23:19.700278 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.700536 kubelet[2798]: W1106 00:23:19.700290 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.700536 kubelet[2798]: E1106 00:23:19.700297 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.700536 kubelet[2798]: E1106 00:23:19.700476 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.700536 kubelet[2798]: W1106 00:23:19.700483 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.700536 kubelet[2798]: E1106 00:23:19.700490 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.700736 kubelet[2798]: E1106 00:23:19.700629 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.700736 kubelet[2798]: W1106 00:23:19.700645 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.700736 kubelet[2798]: E1106 00:23:19.700652 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.701053 kubelet[2798]: E1106 00:23:19.700821 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.701053 kubelet[2798]: W1106 00:23:19.700833 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.701053 kubelet[2798]: E1106 00:23:19.700865 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.701053 kubelet[2798]: E1106 00:23:19.700999 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.701053 kubelet[2798]: W1106 00:23:19.701014 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.701053 kubelet[2798]: E1106 00:23:19.701021 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.702560 kubelet[2798]: E1106 00:23:19.702532 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.702560 kubelet[2798]: W1106 00:23:19.702543 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.702653 kubelet[2798]: E1106 00:23:19.702569 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.703081 kubelet[2798]: E1106 00:23:19.702707 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.703081 kubelet[2798]: W1106 00:23:19.702732 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.703081 kubelet[2798]: E1106 00:23:19.702740 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.703081 kubelet[2798]: E1106 00:23:19.702856 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.703081 kubelet[2798]: W1106 00:23:19.702862 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.703081 kubelet[2798]: E1106 00:23:19.702868 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.703081 kubelet[2798]: E1106 00:23:19.702981 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.703081 kubelet[2798]: W1106 00:23:19.702987 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.703081 kubelet[2798]: E1106 00:23:19.702994 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.703379 kubelet[2798]: E1106 00:23:19.703120 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.703379 kubelet[2798]: W1106 00:23:19.703126 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.703379 kubelet[2798]: E1106 00:23:19.703133 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.703379 kubelet[2798]: E1106 00:23:19.703225 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.703379 kubelet[2798]: W1106 00:23:19.703230 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.703379 kubelet[2798]: E1106 00:23:19.703236 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.703675 kubelet[2798]: E1106 00:23:19.703411 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.703675 kubelet[2798]: W1106 00:23:19.703419 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.703675 kubelet[2798]: E1106 00:23:19.703426 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.703675 kubelet[2798]: E1106 00:23:19.703543 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.703675 kubelet[2798]: W1106 00:23:19.703548 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.703675 kubelet[2798]: E1106 00:23:19.703555 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.704018 kubelet[2798]: E1106 00:23:19.703694 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.704018 kubelet[2798]: W1106 00:23:19.703701 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.704018 kubelet[2798]: E1106 00:23:19.703707 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.704018 kubelet[2798]: E1106 00:23:19.703820 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.704018 kubelet[2798]: W1106 00:23:19.703826 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.704018 kubelet[2798]: E1106 00:23:19.703832 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.704018 kubelet[2798]: E1106 00:23:19.703942 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.704018 kubelet[2798]: W1106 00:23:19.703948 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.704018 kubelet[2798]: E1106 00:23:19.703954 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.706204 kubelet[2798]: E1106 00:23:19.704977 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.706204 kubelet[2798]: W1106 00:23:19.704985 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.706204 kubelet[2798]: E1106 00:23:19.704994 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.706204 kubelet[2798]: E1106 00:23:19.705166 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.706204 kubelet[2798]: W1106 00:23:19.705172 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.706204 kubelet[2798]: E1106 00:23:19.705179 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:19.706204 kubelet[2798]: E1106 00:23:19.705524 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.706204 kubelet[2798]: W1106 00:23:19.705530 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.706204 kubelet[2798]: E1106 00:23:19.705537 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.714503 kubelet[2798]: E1106 00:23:19.714476 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:19.714503 kubelet[2798]: W1106 00:23:19.714496 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:19.714503 kubelet[2798]: E1106 00:23:19.714508 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:19.731203 systemd[1]: Started cri-containerd-72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850.scope - libcontainer container 72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850. Nov 6 00:23:19.762485 containerd[1607]: time="2025-11-06T00:23:19.762424016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hjvm9,Uid:d6010ff1-00aa-4b99-90d2-17d88a8f628b,Namespace:calico-system,Attempt:0,} returns sandbox id \"72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850\"" Nov 6 00:23:21.157002 kubelet[2798]: E1106 00:23:21.156633 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:21.316648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141882422.mount: Deactivated successfully. 
Nov 6 00:23:21.768443 containerd[1607]: time="2025-11-06T00:23:21.768091759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:21.769541 containerd[1607]: time="2025-11-06T00:23:21.769504511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 6 00:23:21.771054 containerd[1607]: time="2025-11-06T00:23:21.770337895Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:21.772289 containerd[1607]: time="2025-11-06T00:23:21.772262598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:21.772815 containerd[1607]: time="2025-11-06T00:23:21.772788826Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.207843583s" Nov 6 00:23:21.772927 containerd[1607]: time="2025-11-06T00:23:21.772916486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 00:23:21.775214 containerd[1607]: time="2025-11-06T00:23:21.774272150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 00:23:21.786569 containerd[1607]: time="2025-11-06T00:23:21.786540878Z" level=info msg="CreateContainer within sandbox \"39e0ea51276a900a85347abf2ec01ab1e271c6a5bed5e67f6820b8d81898f311\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 00:23:21.795136 containerd[1607]: time="2025-11-06T00:23:21.795114370Z" level=info msg="Container 5fe9970cc584cf96163886dfc7057f798f26d0b96eede4bc6a1baa2e26badbd2: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:21.801854 containerd[1607]: time="2025-11-06T00:23:21.801818493Z" level=info msg="CreateContainer within sandbox \"39e0ea51276a900a85347abf2ec01ab1e271c6a5bed5e67f6820b8d81898f311\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5fe9970cc584cf96163886dfc7057f798f26d0b96eede4bc6a1baa2e26badbd2\"" Nov 6 00:23:21.802353 containerd[1607]: time="2025-11-06T00:23:21.802323641Z" level=info msg="StartContainer for \"5fe9970cc584cf96163886dfc7057f798f26d0b96eede4bc6a1baa2e26badbd2\"" Nov 6 00:23:21.808164 containerd[1607]: time="2025-11-06T00:23:21.808017858Z" level=info msg="connecting to shim 5fe9970cc584cf96163886dfc7057f798f26d0b96eede4bc6a1baa2e26badbd2" address="unix:///run/containerd/s/d40583c4262d3296d42c8cd6cee839c8404fe27c111005896fced3a473c92428" protocol=ttrpc version=3 Nov 6 00:23:21.826213 systemd[1]: Started cri-containerd-5fe9970cc584cf96163886dfc7057f798f26d0b96eede4bc6a1baa2e26badbd2.scope - libcontainer container 5fe9970cc584cf96163886dfc7057f798f26d0b96eede4bc6a1baa2e26badbd2. 
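A note on the pod_startup_latency_tracker record logged just below, reconstructed from the values it carries (this derivation is inferred from the logged timestamps, not from kubelet source): podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, 00:23:22.356676702 - 00:23:19 = 3.356676702 s. podStartSLOduration additionally excludes the image-pull window, lastFinishedPulling - firstStartedPulling = 24.745544418 - 22.536358846 = 2.209185572 s in monotonic offsets, so 3.356676702 - 2.209185572 = 1.14749113 s, matching the logged value. Containerd itself reports the typha pull above as 2.207843583 s; the small difference is presumably kubelet-side bookkeeping around the pull window.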
Nov 6 00:23:21.882640 containerd[1607]: time="2025-11-06T00:23:21.882556183Z" level=info msg="StartContainer for \"5fe9970cc584cf96163886dfc7057f798f26d0b96eede4bc6a1baa2e26badbd2\" returns successfully" Nov 6 00:23:22.360518 kubelet[2798]: I1106 00:23:22.359719 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5994b5c59-6s7r6" podStartSLOduration=1.14749113 podStartE2EDuration="3.356676702s" podCreationTimestamp="2025-11-06 00:23:19 +0000 UTC" firstStartedPulling="2025-11-06 00:23:19.564525005 +0000 UTC m=+22.536358846" lastFinishedPulling="2025-11-06 00:23:21.773710576 +0000 UTC m=+24.745544418" observedRunningTime="2025-11-06 00:23:22.353977616 +0000 UTC m=+25.325811488" watchObservedRunningTime="2025-11-06 00:23:22.356676702 +0000 UTC m=+25.328510574" Nov 6 00:23:22.414987 kubelet[2798]: E1106 00:23:22.414925 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.415475 kubelet[2798]: W1106 00:23:22.415099 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.417802 kubelet[2798]: E1106 00:23:22.417510 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.418303 kubelet[2798]: E1106 00:23:22.418271 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.418303 kubelet[2798]: W1106 00:23:22.418297 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.419022 kubelet[2798]: E1106 00:23:22.418433 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.419022 kubelet[2798]: E1106 00:23:22.418804 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.419022 kubelet[2798]: W1106 00:23:22.418820 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.419022 kubelet[2798]: E1106 00:23:22.418836 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.419593 kubelet[2798]: E1106 00:23:22.419237 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.419593 kubelet[2798]: W1106 00:23:22.419252 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.419593 kubelet[2798]: E1106 00:23:22.419268 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:22.419726 kubelet[2798]: E1106 00:23:22.419619 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.419726 kubelet[2798]: W1106 00:23:22.419673 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.419726 kubelet[2798]: E1106 00:23:22.419692 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.420278 kubelet[2798]: E1106 00:23:22.419961 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.420278 kubelet[2798]: W1106 00:23:22.420022 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.420278 kubelet[2798]: E1106 00:23:22.420100 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.420493 kubelet[2798]: E1106 00:23:22.420415 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.420493 kubelet[2798]: W1106 00:23:22.420429 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.420493 kubelet[2798]: E1106 00:23:22.420445 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.420920 kubelet[2798]: E1106 00:23:22.420766 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.420920 kubelet[2798]: W1106 00:23:22.420782 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.420920 kubelet[2798]: E1106 00:23:22.420797 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.421785 kubelet[2798]: E1106 00:23:22.421178 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.421785 kubelet[2798]: W1106 00:23:22.421193 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.421785 kubelet[2798]: E1106 00:23:22.421209 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:22.421954 kubelet[2798]: E1106 00:23:22.421748 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.421954 kubelet[2798]: W1106 00:23:22.421807 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.421954 kubelet[2798]: E1106 00:23:22.421837 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.423171 kubelet[2798]: E1106 00:23:22.422237 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.423171 kubelet[2798]: W1106 00:23:22.422251 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.423171 kubelet[2798]: E1106 00:23:22.422267 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.423171 kubelet[2798]: E1106 00:23:22.422642 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.423171 kubelet[2798]: W1106 00:23:22.422655 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.423171 kubelet[2798]: E1106 00:23:22.422675 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.423171 kubelet[2798]: E1106 00:23:22.423000 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.423171 kubelet[2798]: W1106 00:23:22.423013 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.423171 kubelet[2798]: E1106 00:23:22.423097 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.423593 kubelet[2798]: E1106 00:23:22.423542 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.423593 kubelet[2798]: W1106 00:23:22.423574 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.423679 kubelet[2798]: E1106 00:23:22.423624 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:22.424099 kubelet[2798]: E1106 00:23:22.423954 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.424099 kubelet[2798]: W1106 00:23:22.424007 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.424099 kubelet[2798]: E1106 00:23:22.424085 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.424959 kubelet[2798]: E1106 00:23:22.424902 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.424959 kubelet[2798]: W1106 00:23:22.424929 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.424959 kubelet[2798]: E1106 00:23:22.424945 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.426899 kubelet[2798]: E1106 00:23:22.426795 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.426899 kubelet[2798]: W1106 00:23:22.426837 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.426899 kubelet[2798]: E1106 00:23:22.426855 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.428052 kubelet[2798]: E1106 00:23:22.427178 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.428052 kubelet[2798]: W1106 00:23:22.427221 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.428052 kubelet[2798]: E1106 00:23:22.427238 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.428052 kubelet[2798]: E1106 00:23:22.427517 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.428052 kubelet[2798]: W1106 00:23:22.427530 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.428052 kubelet[2798]: E1106 00:23:22.427546 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:22.428052 kubelet[2798]: E1106 00:23:22.427779 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.428052 kubelet[2798]: W1106 00:23:22.427792 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.428052 kubelet[2798]: E1106 00:23:22.427806 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.429316 kubelet[2798]: E1106 00:23:22.428725 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.429316 kubelet[2798]: W1106 00:23:22.429069 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.429316 kubelet[2798]: E1106 00:23:22.429135 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.429932 kubelet[2798]: E1106 00:23:22.429487 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.429932 kubelet[2798]: W1106 00:23:22.429501 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.429932 kubelet[2798]: E1106 00:23:22.429517 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.430902 kubelet[2798]: E1106 00:23:22.430882 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.431061 kubelet[2798]: W1106 00:23:22.431009 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.431242 kubelet[2798]: E1106 00:23:22.431148 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.431702 kubelet[2798]: E1106 00:23:22.431683 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.431852 kubelet[2798]: W1106 00:23:22.431795 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.431852 kubelet[2798]: E1106 00:23:22.431817 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:22.432443 kubelet[2798]: E1106 00:23:22.432383 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.432650 kubelet[2798]: W1106 00:23:22.432544 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.432650 kubelet[2798]: E1106 00:23:22.432566 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.433305 kubelet[2798]: E1106 00:23:22.433288 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.433537 kubelet[2798]: W1106 00:23:22.433415 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.433537 kubelet[2798]: E1106 00:23:22.433438 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.434511 kubelet[2798]: E1106 00:23:22.434284 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.434511 kubelet[2798]: W1106 00:23:22.434434 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.434511 kubelet[2798]: E1106 00:23:22.434460 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.435338 kubelet[2798]: E1106 00:23:22.435307 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.435575 kubelet[2798]: W1106 00:23:22.435442 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.435575 kubelet[2798]: E1106 00:23:22.435462 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.436095 kubelet[2798]: E1106 00:23:22.436077 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.436309 kubelet[2798]: W1106 00:23:22.436190 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.436309 kubelet[2798]: E1106 00:23:22.436212 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:22.436810 kubelet[2798]: E1106 00:23:22.436752 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.436810 kubelet[2798]: W1106 00:23:22.436771 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.436810 kubelet[2798]: E1106 00:23:22.436791 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.437554 kubelet[2798]: E1106 00:23:22.437492 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.437554 kubelet[2798]: W1106 00:23:22.437516 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.437554 kubelet[2798]: E1106 00:23:22.437532 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.438453 kubelet[2798]: E1106 00:23:22.438415 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.438453 kubelet[2798]: W1106 00:23:22.438437 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.438453 kubelet[2798]: E1106 00:23:22.438453 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:22.438816 kubelet[2798]: E1106 00:23:22.438789 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:22.438816 kubelet[2798]: W1106 00:23:22.438810 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:22.438929 kubelet[2798]: E1106 00:23:22.438825 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 00:23:23.154997 kubelet[2798]: E1106 00:23:23.154432 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:23.340523 kubelet[2798]: I1106 00:23:23.340483 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:23:23.432900 kubelet[2798]: E1106 00:23:23.432349 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:23.432900 kubelet[2798]: W1106 00:23:23.432428 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:23.432900 kubelet[2798]: E1106 00:23:23.432450 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:23.432900 kubelet[2798]: E1106 00:23:23.432629 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:23.432900 kubelet[2798]: W1106 00:23:23.432636 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:23.432900 kubelet[2798]: E1106 00:23:23.432644 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:23.432900 kubelet[2798]: E1106 00:23:23.432859 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:23.432900 kubelet[2798]: W1106 00:23:23.432869 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:23.433619 kubelet[2798]: E1106 00:23:23.433198 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 00:23:23.433928 kubelet[2798]: E1106 00:23:23.433898 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 00:23:23.433928 kubelet[2798]: W1106 00:23:23.433926 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 00:23:23.433986 kubelet[2798]: E1106 00:23:23.433949 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 6 00:23:23.434260 kubelet[2798]: E1106 00:23:23.434225 2798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 6 00:23:23.434260 kubelet[2798]: W1106 00:23:23.434238 2798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 6 00:23:23.434260 kubelet[2798]: E1106 00:23:23.434248 2798 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The preceding three kubelet messages (driver-call.go:262, driver-call.go:149, plugins.go:703) repeat, with only their timestamps changing, roughly thirty times in all between 00:23:23.434 and 00:23:23.442 while the kubelet re-probes the nodeagent~uds FlexVolume plugin directory. The containerd and systemd entries interleaved with that burst follow.]
Nov 6 00:23:23.438741 containerd[1607]: time="2025-11-06T00:23:23.438694244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:23:23.440848 containerd[1607]: time="2025-11-06T00:23:23.440189340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Nov 6 00:23:23.442217 containerd[1607]: time="2025-11-06T00:23:23.441937421Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:23:23.443834 containerd[1607]: time="2025-11-06T00:23:23.443808803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 6 00:23:23.444302 containerd[1607]: time="2025-11-06T00:23:23.444146486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.669062222s"
Nov 6 00:23:23.444302 containerd[1607]: time="2025-11-06T00:23:23.444173567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Nov 6 00:23:23.448572 containerd[1607]: time="2025-11-06T00:23:23.448511860Z" level=info msg="CreateContainer within sandbox \"72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 6 00:23:23.490310 containerd[1607]: time="2025-11-06T00:23:23.490250434Z" level=info msg="Container d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118: CDI devices from CRI Config.CDIDevices: []"
Nov 6 00:23:23.503693 containerd[1607]: time="2025-11-06T00:23:23.503632409Z" level=info msg="CreateContainer within sandbox \"72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118\""
Nov 6 00:23:23.504432 containerd[1607]: time="2025-11-06T00:23:23.504353533Z" level=info msg="StartContainer for \"d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118\""
Nov 6 00:23:23.508268 containerd[1607]: time="2025-11-06T00:23:23.508167410Z" level=info msg="connecting to shim d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118" address="unix:///run/containerd/s/fb52c49adf209eacfe4a539ddaad476355b9d4ab9cf11c83f43a9638faf987ed" protocol=ttrpc version=3
Nov 6 00:23:23.548235 systemd[1]: Started cri-containerd-d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118.scope - libcontainer container d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118.
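The kubelet error burst above comes from the FlexVolume prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, and because that executable is not installed yet the call produces no output, so the empty string cannot be decoded as the JSON status a FlexVolume driver is required to print, hence "unexpected end of JSON input". Below is a minimal Go sketch of the init handshake the kubelet expects; it is illustrative only and is not the real nodeagent~uds binary.

// flexvol-init-sketch.go: illustrative FlexVolume driver stub, not the real nodeagent~uds driver.
// A FlexVolume driver is an executable the kubelet invokes with a subcommand
// ("init", "mount", "unmount", ...) and which must print a JSON status object
// to stdout; an empty reply is what yields "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`                 // "Success", "Failure" or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // only meaningful for "init"
}

func main() {
	if len(os.Args) < 2 {
		json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report the driver as present and without attach/detach support.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
	default:
		// Unimplemented calls should still answer with valid JSON.
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:  "Not supported",
			Message: fmt.Sprintf("command %q not implemented", os.Args[1]),
		})
	}
}

Presumably the noise stops once the flexvol-driver init container created above has installed the real driver binary into that plugin directory.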
Nov 6 00:23:23.620932 containerd[1607]: time="2025-11-06T00:23:23.620870059Z" level=info msg="StartContainer for \"d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118\" returns successfully" Nov 6 00:23:23.625965 systemd[1]: cri-containerd-d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118.scope: Deactivated successfully. Nov 6 00:23:23.642935 containerd[1607]: time="2025-11-06T00:23:23.642888252Z" level=info msg="received exit event container_id:\"d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118\" id:\"d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118\" pid:3523 exited_at:{seconds:1762388603 nanos:631560622}" Nov 6 00:23:23.666078 containerd[1607]: time="2025-11-06T00:23:23.665767992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118\" id:\"d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118\" pid:3523 exited_at:{seconds:1762388603 nanos:631560622}" Nov 6 00:23:23.687551 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9c16262fa2c745b255db64335785cfb711b482bbfc423ca9884cb76421bb118-rootfs.mount: Deactivated successfully. Nov 6 00:23:24.349585 containerd[1607]: time="2025-11-06T00:23:24.349413641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 00:23:25.155066 kubelet[2798]: E1106 00:23:25.154170 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:26.935057 containerd[1607]: time="2025-11-06T00:23:26.934991652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:26.936206 containerd[1607]: time="2025-11-06T00:23:26.936112025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 6 00:23:26.937226 containerd[1607]: time="2025-11-06T00:23:26.937189857Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:26.939903 containerd[1607]: time="2025-11-06T00:23:26.939257297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:26.939903 containerd[1607]: time="2025-11-06T00:23:26.939792572Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.589834511s" Nov 6 00:23:26.939903 containerd[1607]: time="2025-11-06T00:23:26.939827207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 00:23:26.944314 containerd[1607]: time="2025-11-06T00:23:26.944267850Z" level=info msg="CreateContainer within sandbox \"72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850\" for 
container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 00:23:26.954936 containerd[1607]: time="2025-11-06T00:23:26.954748627Z" level=info msg="Container bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:26.968023 containerd[1607]: time="2025-11-06T00:23:26.967965590Z" level=info msg="CreateContainer within sandbox \"72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc\"" Nov 6 00:23:26.970172 containerd[1607]: time="2025-11-06T00:23:26.970111468Z" level=info msg="StartContainer for \"bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc\"" Nov 6 00:23:26.972002 containerd[1607]: time="2025-11-06T00:23:26.971965316Z" level=info msg="connecting to shim bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc" address="unix:///run/containerd/s/fb52c49adf209eacfe4a539ddaad476355b9d4ab9cf11c83f43a9638faf987ed" protocol=ttrpc version=3 Nov 6 00:23:26.996213 systemd[1]: Started cri-containerd-bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc.scope - libcontainer container bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc. Nov 6 00:23:27.039085 containerd[1607]: time="2025-11-06T00:23:27.038987205Z" level=info msg="StartContainer for \"bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc\" returns successfully" Nov 6 00:23:27.162688 kubelet[2798]: E1106 00:23:27.161691 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:27.584932 systemd[1]: cri-containerd-bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc.scope: Deactivated successfully. Nov 6 00:23:27.585915 systemd[1]: cri-containerd-bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc.scope: Consumed 437ms CPU time, 157.4M memory peak, 5.8M read from disk, 171.3M written to disk. Nov 6 00:23:27.638282 containerd[1607]: time="2025-11-06T00:23:27.636713586Z" level=info msg="received exit event container_id:\"bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc\" id:\"bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc\" pid:3580 exited_at:{seconds:1762388607 nanos:636216064}" Nov 6 00:23:27.638282 containerd[1607]: time="2025-11-06T00:23:27.636924522Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc\" id:\"bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc\" pid:3580 exited_at:{seconds:1762388607 nanos:636216064}" Nov 6 00:23:27.662579 kubelet[2798]: I1106 00:23:27.662395 2798 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 00:23:27.701296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bba73e38da70f308aaddd482b7f77eee40d13ebe1d678b6916cb2d6512e7d9dc-rootfs.mount: Deactivated successfully. Nov 6 00:23:27.754401 systemd[1]: Created slice kubepods-burstable-podf5803a11_2eb6_4be9_9c20_ca22054363db.slice - libcontainer container kubepods-burstable-podf5803a11_2eb6_4be9_9c20_ca22054363db.slice. 
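The Created slice entries here and below show the kubelet's systemd cgroup driver naming pod slices from the QoS class and pod UID, with the dashes in the UID escaped to underscores (f5803a11-2eb6-4be9-9c20-ca22054363db becomes kubepods-burstable-podf5803a11_2eb6_4be9_9c20_ca22054363db.slice). A small illustrative Go helper reproducing that naming follows; the guaranteed-class branch is an assumption about the general convention and is not visible in this log.

// podslice.go: reproduces the leaf slice names seen in the log above (illustrative).
package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the leaf slice name for a pod given its QoS class and UID.
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_") // dashes are escaped to underscores
	if qosClass == "guaranteed" {
		// Guaranteed pods sit directly under kubepods.slice (assumed; not shown in this log).
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, uid)
}

func main() {
	// Matches the coredns pod slice created above.
	fmt.Println(podSliceName("burstable", "f5803a11-2eb6-4be9-9c20-ca22054363db"))
	// Matches the besteffort slice created below for calico-kube-controllers.
	fmt.Println(podSliceName("besteffort", "cc4a9674-9eca-4968-950d-28ec9c7b89e9"))
}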
Nov 6 00:23:27.764507 systemd[1]: Created slice kubepods-burstable-podb5d1eeb4_2160_4f69_990a_83733d0cf15b.slice - libcontainer container kubepods-burstable-podb5d1eeb4_2160_4f69_990a_83733d0cf15b.slice. Nov 6 00:23:27.772551 systemd[1]: Created slice kubepods-besteffort-podcc4a9674_9eca_4968_950d_28ec9c7b89e9.slice - libcontainer container kubepods-besteffort-podcc4a9674_9eca_4968_950d_28ec9c7b89e9.slice. Nov 6 00:23:27.775820 kubelet[2798]: I1106 00:23:27.775648 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc4a9674-9eca-4968-950d-28ec9c7b89e9-tigera-ca-bundle\") pod \"calico-kube-controllers-7f86cdd547-wv29n\" (UID: \"cc4a9674-9eca-4968-950d-28ec9c7b89e9\") " pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" Nov 6 00:23:27.775956 kubelet[2798]: I1106 00:23:27.775945 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrpz4\" (UniqueName: \"kubernetes.io/projected/cc4a9674-9eca-4968-950d-28ec9c7b89e9-kube-api-access-mrpz4\") pod \"calico-kube-controllers-7f86cdd547-wv29n\" (UID: \"cc4a9674-9eca-4968-950d-28ec9c7b89e9\") " pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" Nov 6 00:23:27.776129 kubelet[2798]: I1106 00:23:27.776050 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5803a11-2eb6-4be9-9c20-ca22054363db-config-volume\") pod \"coredns-674b8bbfcf-m4hmv\" (UID: \"f5803a11-2eb6-4be9-9c20-ca22054363db\") " pod="kube-system/coredns-674b8bbfcf-m4hmv" Nov 6 00:23:27.776129 kubelet[2798]: I1106 00:23:27.776071 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shk2h\" (UniqueName: \"kubernetes.io/projected/f5803a11-2eb6-4be9-9c20-ca22054363db-kube-api-access-shk2h\") pod \"coredns-674b8bbfcf-m4hmv\" (UID: \"f5803a11-2eb6-4be9-9c20-ca22054363db\") " pod="kube-system/coredns-674b8bbfcf-m4hmv" Nov 6 00:23:27.776129 kubelet[2798]: I1106 00:23:27.776087 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-ca-bundle\") pod \"whisker-5c6d5847cc-qs4hg\" (UID: \"e112540e-f2e0-476b-945f-67b06a61f6cd\") " pod="calico-system/whisker-5c6d5847cc-qs4hg" Nov 6 00:23:27.776129 kubelet[2798]: I1106 00:23:27.776101 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p26b8\" (UniqueName: \"kubernetes.io/projected/e112540e-f2e0-476b-945f-67b06a61f6cd-kube-api-access-p26b8\") pod \"whisker-5c6d5847cc-qs4hg\" (UID: \"e112540e-f2e0-476b-945f-67b06a61f6cd\") " pod="calico-system/whisker-5c6d5847cc-qs4hg" Nov 6 00:23:27.776953 kubelet[2798]: I1106 00:23:27.776248 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5d1eeb4-2160-4f69-990a-83733d0cf15b-config-volume\") pod \"coredns-674b8bbfcf-fzk74\" (UID: \"b5d1eeb4-2160-4f69-990a-83733d0cf15b\") " pod="kube-system/coredns-674b8bbfcf-fzk74" Nov 6 00:23:27.776953 kubelet[2798]: I1106 00:23:27.776277 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a-goldmane-key-pair\") pod \"goldmane-666569f655-j4x55\" (UID: \"7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a\") " pod="calico-system/goldmane-666569f655-j4x55" Nov 6 00:23:27.776953 kubelet[2798]: I1106 00:23:27.776299 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ttfc\" (UniqueName: \"kubernetes.io/projected/e926a82a-b4ef-430a-95dd-9253d2a0007c-kube-api-access-4ttfc\") pod \"calico-apiserver-5c874f48d-mrqff\" (UID: \"e926a82a-b4ef-430a-95dd-9253d2a0007c\") " pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" Nov 6 00:23:27.776953 kubelet[2798]: I1106 00:23:27.776314 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a68094c7-e135-44b6-9a5d-a63247f50c8f-calico-apiserver-certs\") pod \"calico-apiserver-5c874f48d-5fgqv\" (UID: \"a68094c7-e135-44b6-9a5d-a63247f50c8f\") " pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" Nov 6 00:23:27.776953 kubelet[2798]: I1106 00:23:27.776329 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a-goldmane-ca-bundle\") pod \"goldmane-666569f655-j4x55\" (UID: \"7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a\") " pod="calico-system/goldmane-666569f655-j4x55" Nov 6 00:23:27.777374 kubelet[2798]: I1106 00:23:27.776342 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e926a82a-b4ef-430a-95dd-9253d2a0007c-calico-apiserver-certs\") pod \"calico-apiserver-5c874f48d-mrqff\" (UID: \"e926a82a-b4ef-430a-95dd-9253d2a0007c\") " pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" Nov 6 00:23:27.777374 kubelet[2798]: I1106 00:23:27.776371 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mw4\" (UniqueName: \"kubernetes.io/projected/a68094c7-e135-44b6-9a5d-a63247f50c8f-kube-api-access-g6mw4\") pod \"calico-apiserver-5c874f48d-5fgqv\" (UID: \"a68094c7-e135-44b6-9a5d-a63247f50c8f\") " pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" Nov 6 00:23:27.777374 kubelet[2798]: I1106 00:23:27.776384 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-backend-key-pair\") pod \"whisker-5c6d5847cc-qs4hg\" (UID: \"e112540e-f2e0-476b-945f-67b06a61f6cd\") " pod="calico-system/whisker-5c6d5847cc-qs4hg" Nov 6 00:23:27.777374 kubelet[2798]: I1106 00:23:27.776449 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4q2k\" (UniqueName: \"kubernetes.io/projected/b5d1eeb4-2160-4f69-990a-83733d0cf15b-kube-api-access-r4q2k\") pod \"coredns-674b8bbfcf-fzk74\" (UID: \"b5d1eeb4-2160-4f69-990a-83733d0cf15b\") " pod="kube-system/coredns-674b8bbfcf-fzk74" Nov 6 00:23:27.777374 kubelet[2798]: I1106 00:23:27.776504 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a-config\") pod \"goldmane-666569f655-j4x55\" (UID: \"7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a\") " 
pod="calico-system/goldmane-666569f655-j4x55" Nov 6 00:23:27.777703 kubelet[2798]: I1106 00:23:27.776529 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sh8f\" (UniqueName: \"kubernetes.io/projected/7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a-kube-api-access-4sh8f\") pod \"goldmane-666569f655-j4x55\" (UID: \"7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a\") " pod="calico-system/goldmane-666569f655-j4x55" Nov 6 00:23:27.779883 systemd[1]: Created slice kubepods-besteffort-pode926a82a_b4ef_430a_95dd_9253d2a0007c.slice - libcontainer container kubepods-besteffort-pode926a82a_b4ef_430a_95dd_9253d2a0007c.slice. Nov 6 00:23:27.785944 systemd[1]: Created slice kubepods-besteffort-pod7a7f8ed4_e4ea_4ce8_94e8_d2e127cd989a.slice - libcontainer container kubepods-besteffort-pod7a7f8ed4_e4ea_4ce8_94e8_d2e127cd989a.slice. Nov 6 00:23:27.793125 systemd[1]: Created slice kubepods-besteffort-pode112540e_f2e0_476b_945f_67b06a61f6cd.slice - libcontainer container kubepods-besteffort-pode112540e_f2e0_476b_945f_67b06a61f6cd.slice. Nov 6 00:23:27.800918 systemd[1]: Created slice kubepods-besteffort-poda68094c7_e135_44b6_9a5d_a63247f50c8f.slice - libcontainer container kubepods-besteffort-poda68094c7_e135_44b6_9a5d_a63247f50c8f.slice. Nov 6 00:23:28.069545 containerd[1607]: time="2025-11-06T00:23:28.069463163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzk74,Uid:b5d1eeb4-2160-4f69-990a-83733d0cf15b,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:28.081203 containerd[1607]: time="2025-11-06T00:23:28.079106519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4hmv,Uid:f5803a11-2eb6-4be9-9c20-ca22054363db,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:28.091124 containerd[1607]: time="2025-11-06T00:23:28.089548904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-mrqff,Uid:e926a82a-b4ef-430a-95dd-9253d2a0007c,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:28.106281 containerd[1607]: time="2025-11-06T00:23:28.105761255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-5fgqv,Uid:a68094c7-e135-44b6-9a5d-a63247f50c8f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:28.132728 containerd[1607]: time="2025-11-06T00:23:28.132420770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j4x55,Uid:7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:28.144784 containerd[1607]: time="2025-11-06T00:23:28.144756398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f86cdd547-wv29n,Uid:cc4a9674-9eca-4968-950d-28ec9c7b89e9,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:28.152543 containerd[1607]: time="2025-11-06T00:23:28.152470102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6d5847cc-qs4hg,Uid:e112540e-f2e0-476b-945f-67b06a61f6cd,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:28.398531 containerd[1607]: time="2025-11-06T00:23:28.398271829Z" level=error msg="Failed to destroy network for sandbox \"23f9c18096a25726cdea3a8c69267e4dcf515c6d3be1e0562e678ef3c38ac1e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.401392 containerd[1607]: time="2025-11-06T00:23:28.401348642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 
00:23:28.402042 containerd[1607]: time="2025-11-06T00:23:28.401783107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f86cdd547-wv29n,Uid:cc4a9674-9eca-4968-950d-28ec9c7b89e9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9c18096a25726cdea3a8c69267e4dcf515c6d3be1e0562e678ef3c38ac1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.405920 kubelet[2798]: E1106 00:23:28.405886 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9c18096a25726cdea3a8c69267e4dcf515c6d3be1e0562e678ef3c38ac1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.406339 kubelet[2798]: E1106 00:23:28.406085 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9c18096a25726cdea3a8c69267e4dcf515c6d3be1e0562e678ef3c38ac1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" Nov 6 00:23:28.406339 kubelet[2798]: E1106 00:23:28.406105 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f9c18096a25726cdea3a8c69267e4dcf515c6d3be1e0562e678ef3c38ac1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" Nov 6 00:23:28.406796 kubelet[2798]: E1106 00:23:28.406439 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f86cdd547-wv29n_calico-system(cc4a9674-9eca-4968-950d-28ec9c7b89e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f86cdd547-wv29n_calico-system(cc4a9674-9eca-4968-950d-28ec9c7b89e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23f9c18096a25726cdea3a8c69267e4dcf515c6d3be1e0562e678ef3c38ac1e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:23:28.409124 containerd[1607]: time="2025-11-06T00:23:28.408696511Z" level=error msg="Failed to destroy network for sandbox \"b07c6d4d103c9adb7ba822a2289653819ae4a282e5facf1cd61703e3ceebe128\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.412122 containerd[1607]: time="2025-11-06T00:23:28.412089957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-mrqff,Uid:e926a82a-b4ef-430a-95dd-9253d2a0007c,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b07c6d4d103c9adb7ba822a2289653819ae4a282e5facf1cd61703e3ceebe128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.424064 containerd[1607]: time="2025-11-06T00:23:28.423955363Z" level=error msg="Failed to destroy network for sandbox \"740391990eb1eab66b12f4d3b1417213a2e669b8780459cdf9c74b8624d7cc88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.424343 kubelet[2798]: E1106 00:23:28.424302 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b07c6d4d103c9adb7ba822a2289653819ae4a282e5facf1cd61703e3ceebe128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.424512 kubelet[2798]: E1106 00:23:28.424462 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b07c6d4d103c9adb7ba822a2289653819ae4a282e5facf1cd61703e3ceebe128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" Nov 6 00:23:28.424754 kubelet[2798]: E1106 00:23:28.424710 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b07c6d4d103c9adb7ba822a2289653819ae4a282e5facf1cd61703e3ceebe128\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" Nov 6 00:23:28.425088 kubelet[2798]: E1106 00:23:28.425052 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c874f48d-mrqff_calico-apiserver(e926a82a-b4ef-430a-95dd-9253d2a0007c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c874f48d-mrqff_calico-apiserver(e926a82a-b4ef-430a-95dd-9253d2a0007c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b07c6d4d103c9adb7ba822a2289653819ae4a282e5facf1cd61703e3ceebe128\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:23:28.426398 containerd[1607]: time="2025-11-06T00:23:28.425715105Z" level=error msg="Failed to destroy network for sandbox \"1b8831da861d6382b8cd9a1618220bcaba906d71359488c647629dd4cc58b75a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.426663 containerd[1607]: time="2025-11-06T00:23:28.426301034Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-m4hmv,Uid:f5803a11-2eb6-4be9-9c20-ca22054363db,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"740391990eb1eab66b12f4d3b1417213a2e669b8780459cdf9c74b8624d7cc88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.427082 kubelet[2798]: E1106 00:23:28.426822 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740391990eb1eab66b12f4d3b1417213a2e669b8780459cdf9c74b8624d7cc88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.427082 kubelet[2798]: E1106 00:23:28.426843 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740391990eb1eab66b12f4d3b1417213a2e669b8780459cdf9c74b8624d7cc88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m4hmv" Nov 6 00:23:28.427082 kubelet[2798]: E1106 00:23:28.426871 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"740391990eb1eab66b12f4d3b1417213a2e669b8780459cdf9c74b8624d7cc88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-m4hmv" Nov 6 00:23:28.427182 kubelet[2798]: E1106 00:23:28.426905 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-m4hmv_kube-system(f5803a11-2eb6-4be9-9c20-ca22054363db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-m4hmv_kube-system(f5803a11-2eb6-4be9-9c20-ca22054363db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"740391990eb1eab66b12f4d3b1417213a2e669b8780459cdf9c74b8624d7cc88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-m4hmv" podUID="f5803a11-2eb6-4be9-9c20-ca22054363db" Nov 6 00:23:28.427223 containerd[1607]: time="2025-11-06T00:23:28.427110643Z" level=error msg="Failed to destroy network for sandbox \"8a8b394d45a41aa7b9463959d9d42d939ed437656b0b94b9806ab9154db761ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.429654 containerd[1607]: time="2025-11-06T00:23:28.429615543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzk74,Uid:b5d1eeb4-2160-4f69-990a-83733d0cf15b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8831da861d6382b8cd9a1618220bcaba906d71359488c647629dd4cc58b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.430171 kubelet[2798]: E1106 00:23:28.430153 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8831da861d6382b8cd9a1618220bcaba906d71359488c647629dd4cc58b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.430276 kubelet[2798]: E1106 00:23:28.430213 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8831da861d6382b8cd9a1618220bcaba906d71359488c647629dd4cc58b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fzk74" Nov 6 00:23:28.430276 kubelet[2798]: E1106 00:23:28.430227 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b8831da861d6382b8cd9a1618220bcaba906d71359488c647629dd4cc58b75a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fzk74" Nov 6 00:23:28.430398 kubelet[2798]: E1106 00:23:28.430348 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fzk74_kube-system(b5d1eeb4-2160-4f69-990a-83733d0cf15b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fzk74_kube-system(b5d1eeb4-2160-4f69-990a-83733d0cf15b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b8831da861d6382b8cd9a1618220bcaba906d71359488c647629dd4cc58b75a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fzk74" podUID="b5d1eeb4-2160-4f69-990a-83733d0cf15b" Nov 6 00:23:28.431307 containerd[1607]: time="2025-11-06T00:23:28.431276360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-5fgqv,Uid:a68094c7-e135-44b6-9a5d-a63247f50c8f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8b394d45a41aa7b9463959d9d42d939ed437656b0b94b9806ab9154db761ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.431495 kubelet[2798]: E1106 00:23:28.431451 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8b394d45a41aa7b9463959d9d42d939ed437656b0b94b9806ab9154db761ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.431495 kubelet[2798]: E1106 00:23:28.431474 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8a8b394d45a41aa7b9463959d9d42d939ed437656b0b94b9806ab9154db761ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" Nov 6 00:23:28.431629 kubelet[2798]: E1106 00:23:28.431567 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a8b394d45a41aa7b9463959d9d42d939ed437656b0b94b9806ab9154db761ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" Nov 6 00:23:28.431629 kubelet[2798]: E1106 00:23:28.431610 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c874f48d-5fgqv_calico-apiserver(a68094c7-e135-44b6-9a5d-a63247f50c8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c874f48d-5fgqv_calico-apiserver(a68094c7-e135-44b6-9a5d-a63247f50c8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a8b394d45a41aa7b9463959d9d42d939ed437656b0b94b9806ab9154db761ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:23:28.432200 containerd[1607]: time="2025-11-06T00:23:28.432174195Z" level=error msg="Failed to destroy network for sandbox \"420db0ddb8d21f02e3324b99aafbfc4d9d4911ebcf008453439ce3e917888dcc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.436377 containerd[1607]: time="2025-11-06T00:23:28.436314924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6d5847cc-qs4hg,Uid:e112540e-f2e0-476b-945f-67b06a61f6cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"420db0ddb8d21f02e3324b99aafbfc4d9d4911ebcf008453439ce3e917888dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.436692 kubelet[2798]: E1106 00:23:28.436659 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420db0ddb8d21f02e3324b99aafbfc4d9d4911ebcf008453439ce3e917888dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.436776 kubelet[2798]: E1106 00:23:28.436708 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420db0ddb8d21f02e3324b99aafbfc4d9d4911ebcf008453439ce3e917888dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c6d5847cc-qs4hg" Nov 6 
00:23:28.436776 kubelet[2798]: E1106 00:23:28.436729 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"420db0ddb8d21f02e3324b99aafbfc4d9d4911ebcf008453439ce3e917888dcc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c6d5847cc-qs4hg" Nov 6 00:23:28.436921 kubelet[2798]: E1106 00:23:28.436768 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c6d5847cc-qs4hg_calico-system(e112540e-f2e0-476b-945f-67b06a61f6cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c6d5847cc-qs4hg_calico-system(e112540e-f2e0-476b-945f-67b06a61f6cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"420db0ddb8d21f02e3324b99aafbfc4d9d4911ebcf008453439ce3e917888dcc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c6d5847cc-qs4hg" podUID="e112540e-f2e0-476b-945f-67b06a61f6cd" Nov 6 00:23:28.438873 containerd[1607]: time="2025-11-06T00:23:28.438829282Z" level=error msg="Failed to destroy network for sandbox \"97055c9bbdd2a363cce0087cb75e32d8265ba368683113dfa8729feec7d62c2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.440155 containerd[1607]: time="2025-11-06T00:23:28.440082905Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j4x55,Uid:7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97055c9bbdd2a363cce0087cb75e32d8265ba368683113dfa8729feec7d62c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.440732 kubelet[2798]: E1106 00:23:28.440317 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97055c9bbdd2a363cce0087cb75e32d8265ba368683113dfa8729feec7d62c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:28.440732 kubelet[2798]: E1106 00:23:28.440371 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97055c9bbdd2a363cce0087cb75e32d8265ba368683113dfa8729feec7d62c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j4x55" Nov 6 00:23:28.440732 kubelet[2798]: E1106 00:23:28.440385 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97055c9bbdd2a363cce0087cb75e32d8265ba368683113dfa8729feec7d62c2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-j4x55" Nov 6 00:23:28.440810 kubelet[2798]: E1106 00:23:28.440429 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-j4x55_calico-system(7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-j4x55_calico-system(7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97055c9bbdd2a363cce0087cb75e32d8265ba368683113dfa8729feec7d62c2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:23:28.956791 systemd[1]: run-netns-cni\x2d14b2f2e9\x2dbd33\x2ddf0d\x2deffb\x2d82a962e25a6f.mount: Deactivated successfully. Nov 6 00:23:28.956959 systemd[1]: run-netns-cni\x2d8aedc605\x2d1b1c\x2dc6ed\x2d3341\x2d646409605ba7.mount: Deactivated successfully. Nov 6 00:23:28.957551 systemd[1]: run-netns-cni\x2dfdcc9234\x2d04f6\x2d6b11\x2d589a\x2d8d204fd31525.mount: Deactivated successfully. Nov 6 00:23:28.957715 systemd[1]: run-netns-cni\x2db14839df\x2d29aa\x2d080d\x2de662\x2dce747488dd98.mount: Deactivated successfully. Nov 6 00:23:28.957858 systemd[1]: run-netns-cni\x2d97c14baa\x2db1d3\x2d1a6e\x2d1f43\x2d14138897b84b.mount: Deactivated successfully. Nov 6 00:23:29.164225 systemd[1]: Created slice kubepods-besteffort-podf296fc03_b516_4c28_a887_9cf8255c6651.slice - libcontainer container kubepods-besteffort-podf296fc03_b516_4c28_a887_9cf8255c6651.slice. Nov 6 00:23:29.167959 containerd[1607]: time="2025-11-06T00:23:29.167903550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzjlj,Uid:f296fc03-b516-4c28-a887-9cf8255c6651,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:29.258593 containerd[1607]: time="2025-11-06T00:23:29.258464709Z" level=error msg="Failed to destroy network for sandbox \"376ff6a1cb14d1076ca7dcda61c9a82797bd0dc33208d4b98db28bbad9e4a39a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:29.261508 systemd[1]: run-netns-cni\x2d2369d0fb\x2d4a58\x2d18df\x2d909b\x2d2ce59e7591ee.mount: Deactivated successfully. 
Nov 6 00:23:29.262675 containerd[1607]: time="2025-11-06T00:23:29.261864879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzjlj,Uid:f296fc03-b516-4c28-a887-9cf8255c6651,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"376ff6a1cb14d1076ca7dcda61c9a82797bd0dc33208d4b98db28bbad9e4a39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:29.262804 kubelet[2798]: E1106 00:23:29.262576 2798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376ff6a1cb14d1076ca7dcda61c9a82797bd0dc33208d4b98db28bbad9e4a39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 00:23:29.262804 kubelet[2798]: E1106 00:23:29.262636 2798 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376ff6a1cb14d1076ca7dcda61c9a82797bd0dc33208d4b98db28bbad9e4a39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dzjlj" Nov 6 00:23:29.262804 kubelet[2798]: E1106 00:23:29.262661 2798 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"376ff6a1cb14d1076ca7dcda61c9a82797bd0dc33208d4b98db28bbad9e4a39a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dzjlj" Nov 6 00:23:29.262914 kubelet[2798]: E1106 00:23:29.262725 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"376ff6a1cb14d1076ca7dcda61c9a82797bd0dc33208d4b98db28bbad9e4a39a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:32.543774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58402342.mount: Deactivated successfully. 
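[Editor's note] Every CreatePodSandbox failure up to this point carries the same root cause string: the Calico CNI plugin stats /var/lib/calico/nodename and aborts the ADD/DEL when the file is absent, i.e. before the calico/node container has started and written it. A minimal sketch of that precondition check, using only the path quoted verbatim in the errors (the real plugin of course does far more):

```go
package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path quoted in the CNI errors above; calico/node writes
// the node's name here once it is running with /var/lib/calico/ mounted.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the failure mode in the log: sandbox networking cannot
		// proceed until calico/node has created the file.
		fmt.Printf("CNI ADD would fail: %v\n", err)
		return
	}
	fmt.Println("nodename present; sandbox networking can proceed")
}
```

Once the calico-node container starts a few seconds later (see the image pull and StartContainer entries that follow), these sandbox errors stop recurring.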
Nov 6 00:23:32.719759 containerd[1607]: time="2025-11-06T00:23:32.683147459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:32.728904 containerd[1607]: time="2025-11-06T00:23:32.728753048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 6 00:23:32.760500 containerd[1607]: time="2025-11-06T00:23:32.759897783Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:32.761168 containerd[1607]: time="2025-11-06T00:23:32.760851042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 00:23:32.766534 containerd[1607]: time="2025-11-06T00:23:32.766496404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 4.360099119s" Nov 6 00:23:32.766606 containerd[1607]: time="2025-11-06T00:23:32.766538804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 00:23:32.797964 containerd[1607]: time="2025-11-06T00:23:32.797716681Z" level=info msg="CreateContainer within sandbox \"72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 00:23:32.895589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622191144.mount: Deactivated successfully. Nov 6 00:23:32.895987 containerd[1607]: time="2025-11-06T00:23:32.895914155Z" level=info msg="Container c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:32.948571 containerd[1607]: time="2025-11-06T00:23:32.948520448Z" level=info msg="CreateContainer within sandbox \"72d54fb48cadc5868212695e8feb8ed8a14e212e8f9a838f91eba16107e4b850\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\"" Nov 6 00:23:32.949698 containerd[1607]: time="2025-11-06T00:23:32.949668382Z" level=info msg="StartContainer for \"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\"" Nov 6 00:23:32.969592 containerd[1607]: time="2025-11-06T00:23:32.969495483Z" level=info msg="connecting to shim c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb" address="unix:///run/containerd/s/fb52c49adf209eacfe4a539ddaad476355b9d4ab9cf11c83f43a9638faf987ed" protocol=ttrpc version=3 Nov 6 00:23:33.027270 systemd[1]: Started cri-containerd-c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb.scope - libcontainer container c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb. Nov 6 00:23:33.083884 containerd[1607]: time="2025-11-06T00:23:33.083482319Z" level=info msg="StartContainer for \"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" returns successfully" Nov 6 00:23:33.197475 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Nov 6 00:23:33.199299 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 6 00:23:33.429068 kubelet[2798]: I1106 00:23:33.428395 2798 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p26b8\" (UniqueName: \"kubernetes.io/projected/e112540e-f2e0-476b-945f-67b06a61f6cd-kube-api-access-p26b8\") pod \"e112540e-f2e0-476b-945f-67b06a61f6cd\" (UID: \"e112540e-f2e0-476b-945f-67b06a61f6cd\") " Nov 6 00:23:33.429068 kubelet[2798]: I1106 00:23:33.428635 2798 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-ca-bundle\") pod \"e112540e-f2e0-476b-945f-67b06a61f6cd\" (UID: \"e112540e-f2e0-476b-945f-67b06a61f6cd\") " Nov 6 00:23:33.429068 kubelet[2798]: I1106 00:23:33.428997 2798 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-backend-key-pair\") pod \"e112540e-f2e0-476b-945f-67b06a61f6cd\" (UID: \"e112540e-f2e0-476b-945f-67b06a61f6cd\") " Nov 6 00:23:33.442776 kubelet[2798]: I1106 00:23:33.442739 2798 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e112540e-f2e0-476b-945f-67b06a61f6cd-kube-api-access-p26b8" (OuterVolumeSpecName: "kube-api-access-p26b8") pod "e112540e-f2e0-476b-945f-67b06a61f6cd" (UID: "e112540e-f2e0-476b-945f-67b06a61f6cd"). InnerVolumeSpecName "kube-api-access-p26b8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 00:23:33.443043 kubelet[2798]: I1106 00:23:33.442956 2798 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e112540e-f2e0-476b-945f-67b06a61f6cd" (UID: "e112540e-f2e0-476b-945f-67b06a61f6cd"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 00:23:33.452179 kubelet[2798]: I1106 00:23:33.451968 2798 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e112540e-f2e0-476b-945f-67b06a61f6cd" (UID: "e112540e-f2e0-476b-945f-67b06a61f6cd"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 00:23:33.477609 kubelet[2798]: I1106 00:23:33.474688 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hjvm9" podStartSLOduration=1.4696355429999999 podStartE2EDuration="14.474372224s" podCreationTimestamp="2025-11-06 00:23:19 +0000 UTC" firstStartedPulling="2025-11-06 00:23:19.764106795 +0000 UTC m=+22.735940658" lastFinishedPulling="2025-11-06 00:23:32.768843487 +0000 UTC m=+35.740677339" observedRunningTime="2025-11-06 00:23:33.472372052 +0000 UTC m=+36.444205884" watchObservedRunningTime="2025-11-06 00:23:33.474372224 +0000 UTC m=+36.446206067" Nov 6 00:23:33.531075 kubelet[2798]: I1106 00:23:33.531006 2798 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-backend-key-pair\") on node \"ci-4459-1-0-n-bff22aa786\" DevicePath \"\"" Nov 6 00:23:33.531957 kubelet[2798]: I1106 00:23:33.531919 2798 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p26b8\" (UniqueName: \"kubernetes.io/projected/e112540e-f2e0-476b-945f-67b06a61f6cd-kube-api-access-p26b8\") on node \"ci-4459-1-0-n-bff22aa786\" DevicePath \"\"" Nov 6 00:23:33.531957 kubelet[2798]: I1106 00:23:33.531938 2798 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e112540e-f2e0-476b-945f-67b06a61f6cd-whisker-ca-bundle\") on node \"ci-4459-1-0-n-bff22aa786\" DevicePath \"\"" Nov 6 00:23:33.548264 systemd[1]: var-lib-kubelet-pods-e112540e\x2df2e0\x2d476b\x2d945f\x2d67b06a61f6cd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp26b8.mount: Deactivated successfully. Nov 6 00:23:33.548346 systemd[1]: var-lib-kubelet-pods-e112540e\x2df2e0\x2d476b\x2d945f\x2d67b06a61f6cd-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 00:23:33.654153 containerd[1607]: time="2025-11-06T00:23:33.654095865Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"6d6458083afc1a7872755ee716110ec7b27954079a37bdb864a3dfa24d59d9b2\" pid:3913 exit_status:1 exited_at:{seconds:1762388613 nanos:647837183}" Nov 6 00:23:33.754014 systemd[1]: Removed slice kubepods-besteffort-pode112540e_f2e0_476b_945f_67b06a61f6cd.slice - libcontainer container kubepods-besteffort-pode112540e_f2e0_476b_945f_67b06a61f6cd.slice. Nov 6 00:23:33.869519 systemd[1]: Created slice kubepods-besteffort-pod9cab2cb0_7aac_4257_baff_c860234a94ee.slice - libcontainer container kubepods-besteffort-pod9cab2cb0_7aac_4257_baff_c860234a94ee.slice. 
Nov 6 00:23:33.934524 kubelet[2798]: I1106 00:23:33.934420 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgx74\" (UniqueName: \"kubernetes.io/projected/9cab2cb0-7aac-4257-baff-c860234a94ee-kube-api-access-kgx74\") pod \"whisker-5b54cc9969-225cb\" (UID: \"9cab2cb0-7aac-4257-baff-c860234a94ee\") " pod="calico-system/whisker-5b54cc9969-225cb" Nov 6 00:23:33.934524 kubelet[2798]: I1106 00:23:33.934496 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9cab2cb0-7aac-4257-baff-c860234a94ee-whisker-backend-key-pair\") pod \"whisker-5b54cc9969-225cb\" (UID: \"9cab2cb0-7aac-4257-baff-c860234a94ee\") " pod="calico-system/whisker-5b54cc9969-225cb" Nov 6 00:23:33.934524 kubelet[2798]: I1106 00:23:33.934532 2798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9cab2cb0-7aac-4257-baff-c860234a94ee-whisker-ca-bundle\") pod \"whisker-5b54cc9969-225cb\" (UID: \"9cab2cb0-7aac-4257-baff-c860234a94ee\") " pod="calico-system/whisker-5b54cc9969-225cb" Nov 6 00:23:34.175636 containerd[1607]: time="2025-11-06T00:23:34.175566973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b54cc9969-225cb,Uid:9cab2cb0-7aac-4257-baff-c860234a94ee,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:34.589271 systemd-networkd[1473]: calida9fd4d873a: Link UP Nov 6 00:23:34.589937 systemd-networkd[1473]: calida9fd4d873a: Gained carrier Nov 6 00:23:34.595625 containerd[1607]: time="2025-11-06T00:23:34.595553374Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"bd5e013c238b49702c2d1d292234ba12e94e59599761cef6a2acd253be8d03d5\" pid:3967 exit_status:1 exited_at:{seconds:1762388614 nanos:595142172}" Nov 6 00:23:34.623517 containerd[1607]: 2025-11-06 00:23:34.240 [INFO][3939] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:34.623517 containerd[1607]: 2025-11-06 00:23:34.281 [INFO][3939] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0 whisker-5b54cc9969- calico-system 9cab2cb0-7aac-4257-baff-c860234a94ee 877 0 2025-11-06 00:23:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b54cc9969 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 whisker-5b54cc9969-225cb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calida9fd4d873a [] [] }} ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-" Nov 6 00:23:34.623517 containerd[1607]: 2025-11-06 00:23:34.281 [INFO][3939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" Nov 6 00:23:34.623517 containerd[1607]: 2025-11-06 00:23:34.516 [INFO][3950] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" HandleID="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Workload="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.518 [INFO][3950] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" HandleID="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Workload="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bc820), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-n-bff22aa786", "pod":"whisker-5b54cc9969-225cb", "timestamp":"2025-11-06 00:23:34.516355895 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.518 [INFO][3950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.518 [INFO][3950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.519 [INFO][3950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.533 [INFO][3950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.546 [INFO][3950] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.553 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.555 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.623732 containerd[1607]: 2025-11-06 00:23:34.557 [INFO][3950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.624567 containerd[1607]: 2025-11-06 00:23:34.557 [INFO][3950] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.624567 containerd[1607]: 2025-11-06 00:23:34.559 [INFO][3950] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440 Nov 6 00:23:34.624567 containerd[1607]: 2025-11-06 00:23:34.565 [INFO][3950] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.624567 containerd[1607]: 2025-11-06 00:23:34.571 [INFO][3950] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.1/26] block=192.168.91.0/26 handle="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" host="ci-4459-1-0-n-bff22aa786" 
Nov 6 00:23:34.624567 containerd[1607]: 2025-11-06 00:23:34.571 [INFO][3950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.1/26] handle="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:34.624567 containerd[1607]: 2025-11-06 00:23:34.571 [INFO][3950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:23:34.624567 containerd[1607]: 2025-11-06 00:23:34.571 [INFO][3950] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.1/26] IPv6=[] ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" HandleID="k8s-pod-network.559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Workload="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" Nov 6 00:23:34.624961 containerd[1607]: 2025-11-06 00:23:34.574 [INFO][3939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0", GenerateName:"whisker-5b54cc9969-", Namespace:"calico-system", SelfLink:"", UID:"9cab2cb0-7aac-4257-baff-c860234a94ee", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b54cc9969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"whisker-5b54cc9969-225cb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida9fd4d873a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:34.624961 containerd[1607]: 2025-11-06 00:23:34.574 [INFO][3939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.1/32] ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" Nov 6 00:23:34.625047 containerd[1607]: 2025-11-06 00:23:34.574 [INFO][3939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida9fd4d873a ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" Nov 6 00:23:34.625047 containerd[1607]: 2025-11-06 00:23:34.591 [INFO][3939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" 
Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" Nov 6 00:23:34.625092 containerd[1607]: 2025-11-06 00:23:34.591 [INFO][3939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0", GenerateName:"whisker-5b54cc9969-", Namespace:"calico-system", SelfLink:"", UID:"9cab2cb0-7aac-4257-baff-c860234a94ee", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b54cc9969", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440", Pod:"whisker-5b54cc9969-225cb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.91.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calida9fd4d873a", MAC:"92:70:2d:51:bf:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:34.625140 containerd[1607]: 2025-11-06 00:23:34.615 [INFO][3939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" Namespace="calico-system" Pod="whisker-5b54cc9969-225cb" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-whisker--5b54cc9969--225cb-eth0" Nov 6 00:23:34.816341 containerd[1607]: time="2025-11-06T00:23:34.816199165Z" level=info msg="connecting to shim 559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440" address="unix:///run/containerd/s/0b2e73fa2be4ac3dda0cdfb6981fe911f123a35f17fa90c773477e41ea930c80" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:34.848369 systemd[1]: Started cri-containerd-559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440.scope - libcontainer container 559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440. 
Nov 6 00:23:34.932053 containerd[1607]: time="2025-11-06T00:23:34.930786693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b54cc9969-225cb,Uid:9cab2cb0-7aac-4257-baff-c860234a94ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"559a4db0735f90f45b99a08043328615e59fdb311ef61d742eaea6fa5ba0a440\"" Nov 6 00:23:34.946992 containerd[1607]: time="2025-11-06T00:23:34.946947413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:23:35.159677 kubelet[2798]: I1106 00:23:35.159229 2798 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e112540e-f2e0-476b-945f-67b06a61f6cd" path="/var/lib/kubelet/pods/e112540e-f2e0-476b-945f-67b06a61f6cd/volumes" Nov 6 00:23:35.461167 containerd[1607]: time="2025-11-06T00:23:35.460457575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:35.472244 containerd[1607]: time="2025-11-06T00:23:35.470153154Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:23:35.472506 containerd[1607]: time="2025-11-06T00:23:35.470267299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:23:35.472644 kubelet[2798]: E1106 00:23:35.472574 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:35.472720 kubelet[2798]: E1106 00:23:35.472672 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:35.479887 kubelet[2798]: E1106 00:23:35.479771 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf747a2b409420980f64dd3ca00a319,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:35.484056 containerd[1607]: time="2025-11-06T00:23:35.483984815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:23:35.942501 containerd[1607]: time="2025-11-06T00:23:35.942402863Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:35.944452 containerd[1607]: time="2025-11-06T00:23:35.944280597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:23:35.944655 containerd[1607]: time="2025-11-06T00:23:35.944412093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:35.944730 kubelet[2798]: E1106 00:23:35.944636 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:35.944730 kubelet[2798]: E1106 00:23:35.944698 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:35.945064 kubelet[2798]: E1106 00:23:35.944906 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:35.946716 kubelet[2798]: E1106 00:23:35.946642 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:23:36.310475 systemd-networkd[1473]: calida9fd4d873a: Gained IPv6LL Nov 6 00:23:36.460811 kubelet[2798]: E1106 00:23:36.460660 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:23:39.156071 containerd[1607]: time="2025-11-06T00:23:39.155998820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f86cdd547-wv29n,Uid:cc4a9674-9eca-4968-950d-28ec9c7b89e9,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:39.156917 containerd[1607]: time="2025-11-06T00:23:39.156220676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-mrqff,Uid:e926a82a-b4ef-430a-95dd-9253d2a0007c,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:39.346214 systemd-networkd[1473]: cali1579317c98b: Link UP Nov 6 00:23:39.347102 systemd-networkd[1473]: cali1579317c98b: Gained carrier Nov 6 00:23:39.376741 containerd[1607]: 2025-11-06 00:23:39.228 [INFO][4200] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:39.376741 containerd[1607]: 2025-11-06 00:23:39.243 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0 calico-apiserver-5c874f48d- calico-apiserver e926a82a-b4ef-430a-95dd-9253d2a0007c 810 0 2025-11-06 00:23:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c874f48d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 calico-apiserver-5c874f48d-mrqff eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1579317c98b [] [] }} ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-" Nov 6 00:23:39.376741 containerd[1607]: 2025-11-06 00:23:39.243 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" Nov 6 00:23:39.376741 containerd[1607]: 2025-11-06 00:23:39.293 [INFO][4220] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" HandleID="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.293 [INFO][4220] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" HandleID="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-n-bff22aa786", "pod":"calico-apiserver-5c874f48d-mrqff", "timestamp":"2025-11-06 00:23:39.293384558 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.293 [INFO][4220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.293 [INFO][4220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.293 [INFO][4220] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.302 [INFO][4220] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.309 [INFO][4220] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.314 [INFO][4220] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.316 [INFO][4220] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.376930 containerd[1607]: 2025-11-06 00:23:39.319 [INFO][4220] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.377132 containerd[1607]: 2025-11-06 00:23:39.319 [INFO][4220] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.377132 containerd[1607]: 2025-11-06 00:23:39.320 [INFO][4220] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7 Nov 6 00:23:39.377132 containerd[1607]: 2025-11-06 00:23:39.327 [INFO][4220] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.377132 containerd[1607]: 2025-11-06 00:23:39.334 [INFO][4220] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.2/26] block=192.168.91.0/26 handle="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.377132 containerd[1607]: 2025-11-06 00:23:39.336 [INFO][4220] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.2/26] handle="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.377132 containerd[1607]: 
2025-11-06 00:23:39.336 [INFO][4220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 00:23:39.377132 containerd[1607]: 2025-11-06 00:23:39.336 [INFO][4220] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.2/26] IPv6=[] ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" HandleID="k8s-pod-network.c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" Nov 6 00:23:39.377267 containerd[1607]: 2025-11-06 00:23:39.342 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0", GenerateName:"calico-apiserver-5c874f48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e926a82a-b4ef-430a-95dd-9253d2a0007c", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c874f48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"calico-apiserver-5c874f48d-mrqff", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1579317c98b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:39.377319 containerd[1607]: 2025-11-06 00:23:39.342 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.2/32] ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" Nov 6 00:23:39.377319 containerd[1607]: 2025-11-06 00:23:39.342 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1579317c98b ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" Nov 6 00:23:39.377319 containerd[1607]: 2025-11-06 00:23:39.344 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" 
WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" Nov 6 00:23:39.377385 containerd[1607]: 2025-11-06 00:23:39.347 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0", GenerateName:"calico-apiserver-5c874f48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"e926a82a-b4ef-430a-95dd-9253d2a0007c", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c874f48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7", Pod:"calico-apiserver-5c874f48d-mrqff", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1579317c98b", MAC:"4e:92:12:1b:09:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:39.377430 containerd[1607]: 2025-11-06 00:23:39.367 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-mrqff" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--mrqff-eth0" Nov 6 00:23:39.404568 containerd[1607]: time="2025-11-06T00:23:39.404514340Z" level=info msg="connecting to shim c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7" address="unix:///run/containerd/s/83b938af17583c82a7c3374e6225e5721a01bb8be07bc0c17f85a1d6877eebbb" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:39.442315 systemd[1]: Started cri-containerd-c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7.scope - libcontainer container c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7. 
Nov 6 00:23:39.468712 systemd-networkd[1473]: cali9f0ee6b8e75: Link UP Nov 6 00:23:39.470175 systemd-networkd[1473]: cali9f0ee6b8e75: Gained carrier Nov 6 00:23:39.486956 containerd[1607]: 2025-11-06 00:23:39.238 [INFO][4196] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:39.486956 containerd[1607]: 2025-11-06 00:23:39.256 [INFO][4196] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0 calico-kube-controllers-7f86cdd547- calico-system cc4a9674-9eca-4968-950d-28ec9c7b89e9 815 0 2025-11-06 00:23:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f86cdd547 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 calico-kube-controllers-7f86cdd547-wv29n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9f0ee6b8e75 [] [] }} ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-" Nov 6 00:23:39.486956 containerd[1607]: 2025-11-06 00:23:39.257 [INFO][4196] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" Nov 6 00:23:39.486956 containerd[1607]: 2025-11-06 00:23:39.312 [INFO][4225] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" HandleID="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.312 [INFO][4225] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" HandleID="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025a140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-n-bff22aa786", "pod":"calico-kube-controllers-7f86cdd547-wv29n", "timestamp":"2025-11-06 00:23:39.312564891 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.312 [INFO][4225] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.336 [INFO][4225] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.336 [INFO][4225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.402 [INFO][4225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.416 [INFO][4225] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.430 [INFO][4225] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.439 [INFO][4225] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.487736 containerd[1607]: 2025-11-06 00:23:39.446 [INFO][4225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.488288 containerd[1607]: 2025-11-06 00:23:39.446 [INFO][4225] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.488288 containerd[1607]: 2025-11-06 00:23:39.449 [INFO][4225] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318 Nov 6 00:23:39.488288 containerd[1607]: 2025-11-06 00:23:39.456 [INFO][4225] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.488288 containerd[1607]: 2025-11-06 00:23:39.463 [INFO][4225] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.3/26] block=192.168.91.0/26 handle="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.488288 containerd[1607]: 2025-11-06 00:23:39.463 [INFO][4225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.3/26] handle="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:39.488288 containerd[1607]: 2025-11-06 00:23:39.463 [INFO][4225] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:39.488288 containerd[1607]: 2025-11-06 00:23:39.463 [INFO][4225] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.3/26] IPv6=[] ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" HandleID="k8s-pod-network.e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" Nov 6 00:23:39.488838 containerd[1607]: 2025-11-06 00:23:39.465 [INFO][4196] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0", GenerateName:"calico-kube-controllers-7f86cdd547-", Namespace:"calico-system", SelfLink:"", UID:"cc4a9674-9eca-4968-950d-28ec9c7b89e9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f86cdd547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"calico-kube-controllers-7f86cdd547-wv29n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f0ee6b8e75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:39.488897 containerd[1607]: 2025-11-06 00:23:39.466 [INFO][4196] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.3/32] ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" Nov 6 00:23:39.488897 containerd[1607]: 2025-11-06 00:23:39.466 [INFO][4196] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f0ee6b8e75 ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" Nov 6 00:23:39.488897 containerd[1607]: 2025-11-06 00:23:39.467 [INFO][4196] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" 
WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" Nov 6 00:23:39.488956 containerd[1607]: 2025-11-06 00:23:39.468 [INFO][4196] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0", GenerateName:"calico-kube-controllers-7f86cdd547-", Namespace:"calico-system", SelfLink:"", UID:"cc4a9674-9eca-4968-950d-28ec9c7b89e9", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f86cdd547", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318", Pod:"calico-kube-controllers-7f86cdd547-wv29n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.91.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9f0ee6b8e75", MAC:"7e:a5:29:21:d9:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:39.489059 containerd[1607]: 2025-11-06 00:23:39.482 [INFO][4196] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" Namespace="calico-system" Pod="calico-kube-controllers-7f86cdd547-wv29n" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--kube--controllers--7f86cdd547--wv29n-eth0" Nov 6 00:23:39.507549 containerd[1607]: time="2025-11-06T00:23:39.507513018Z" level=info msg="connecting to shim e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318" address="unix:///run/containerd/s/e7d0e1fd117315b5d11a7082aee6aef37698514f4eb75bf3fb4e04fceefc4401" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:39.528390 systemd[1]: Started cri-containerd-e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318.scope - libcontainer container e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318. 
Nov 6 00:23:39.547722 containerd[1607]: time="2025-11-06T00:23:39.547659794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-mrqff,Uid:e926a82a-b4ef-430a-95dd-9253d2a0007c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c345bd2fc125604e5f06fa33d991710f531d94042a909ef8d02e244cdd9fa4f7\"" Nov 6 00:23:39.551054 containerd[1607]: time="2025-11-06T00:23:39.550258870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:39.601766 containerd[1607]: time="2025-11-06T00:23:39.601731835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f86cdd547-wv29n,Uid:cc4a9674-9eca-4968-950d-28ec9c7b89e9,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8655e5e7dbbadbcfc7f64f455560c33c2e6840940e320ff129467e7511d3318\"" Nov 6 00:23:39.985167 containerd[1607]: time="2025-11-06T00:23:39.985059113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:39.986631 containerd[1607]: time="2025-11-06T00:23:39.986542837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:39.986719 containerd[1607]: time="2025-11-06T00:23:39.986663363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:39.986942 kubelet[2798]: E1106 00:23:39.986872 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:39.987447 kubelet[2798]: E1106 00:23:39.986946 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:39.987447 kubelet[2798]: E1106 00:23:39.987286 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ttfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-mrqff_calico-apiserver(e926a82a-b4ef-430a-95dd-9253d2a0007c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:39.988412 containerd[1607]: time="2025-11-06T00:23:39.988262073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:23:39.990090 kubelet[2798]: E1106 00:23:39.988736 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:23:40.155533 containerd[1607]: time="2025-11-06T00:23:40.155445368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-5fgqv,Uid:a68094c7-e135-44b6-9a5d-a63247f50c8f,Namespace:calico-apiserver,Attempt:0,}" Nov 6 00:23:40.337654 systemd-networkd[1473]: cali4c3e77f66b6: Link UP Nov 6 00:23:40.337896 systemd-networkd[1473]: cali4c3e77f66b6: Gained carrier Nov 6 00:23:40.359256 containerd[1607]: 2025-11-06 00:23:40.225 [INFO][4358] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:40.359256 containerd[1607]: 2025-11-06 00:23:40.241 [INFO][4358] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0 calico-apiserver-5c874f48d- calico-apiserver a68094c7-e135-44b6-9a5d-a63247f50c8f 813 0 2025-11-06 00:23:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c874f48d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 calico-apiserver-5c874f48d-5fgqv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4c3e77f66b6 [] [] }} ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-" Nov 6 00:23:40.359256 containerd[1607]: 2025-11-06 00:23:40.242 [INFO][4358] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" Nov 6 00:23:40.359256 containerd[1607]: 2025-11-06 00:23:40.287 [INFO][4370] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" HandleID="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.287 [INFO][4370] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" HandleID="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb880), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-1-0-n-bff22aa786", "pod":"calico-apiserver-5c874f48d-5fgqv", "timestamp":"2025-11-06 00:23:40.287300819 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.287 [INFO][4370] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.287 [INFO][4370] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.287 [INFO][4370] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.298 [INFO][4370] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.303 [INFO][4370] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.309 [INFO][4370] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.311 [INFO][4370] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.359808 containerd[1607]: 2025-11-06 00:23:40.315 [INFO][4370] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.360267 containerd[1607]: 2025-11-06 00:23:40.315 [INFO][4370] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.360267 containerd[1607]: 2025-11-06 00:23:40.317 [INFO][4370] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd Nov 6 00:23:40.360267 containerd[1607]: 2025-11-06 00:23:40.322 [INFO][4370] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.360267 containerd[1607]: 2025-11-06 00:23:40.329 [INFO][4370] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.4/26] block=192.168.91.0/26 handle="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.360267 containerd[1607]: 2025-11-06 00:23:40.329 [INFO][4370] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.4/26] handle="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:40.360267 containerd[1607]: 2025-11-06 00:23:40.329 [INFO][4370] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:40.360267 containerd[1607]: 2025-11-06 00:23:40.329 [INFO][4370] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.4/26] IPv6=[] ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" HandleID="k8s-pod-network.f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Workload="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" Nov 6 00:23:40.360551 containerd[1607]: 2025-11-06 00:23:40.333 [INFO][4358] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0", GenerateName:"calico-apiserver-5c874f48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68094c7-e135-44b6-9a5d-a63247f50c8f", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c874f48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"calico-apiserver-5c874f48d-5fgqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c3e77f66b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:40.360646 containerd[1607]: 2025-11-06 00:23:40.333 [INFO][4358] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.4/32] ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" Nov 6 00:23:40.360646 containerd[1607]: 2025-11-06 00:23:40.333 [INFO][4358] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c3e77f66b6 ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" Nov 6 00:23:40.360646 containerd[1607]: 2025-11-06 00:23:40.338 [INFO][4358] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" Nov 6 00:23:40.360751 containerd[1607]: 2025-11-06 00:23:40.338 [INFO][4358] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0", GenerateName:"calico-apiserver-5c874f48d-", Namespace:"calico-apiserver", SelfLink:"", UID:"a68094c7-e135-44b6-9a5d-a63247f50c8f", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c874f48d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd", Pod:"calico-apiserver-5c874f48d-5fgqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.91.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4c3e77f66b6", MAC:"a6:18:99:ed:da:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:40.360877 containerd[1607]: 2025-11-06 00:23:40.355 [INFO][4358] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" Namespace="calico-apiserver" Pod="calico-apiserver-5c874f48d-5fgqv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-calico--apiserver--5c874f48d--5fgqv-eth0" Nov 6 00:23:40.393493 containerd[1607]: time="2025-11-06T00:23:40.393435596Z" level=info msg="connecting to shim f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd" address="unix:///run/containerd/s/52f10f56c88dd831816cbf03a75663dec136f96bf9dafabf971130e489b164e1" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:40.425202 systemd[1]: Started cri-containerd-f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd.scope - libcontainer container f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd. 
Nov 6 00:23:40.438209 containerd[1607]: time="2025-11-06T00:23:40.438164328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:40.439903 containerd[1607]: time="2025-11-06T00:23:40.439828099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:23:40.440009 containerd[1607]: time="2025-11-06T00:23:40.439916455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:40.440202 kubelet[2798]: E1106 00:23:40.440150 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:40.440299 kubelet[2798]: E1106 00:23:40.440208 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:40.440455 kubelet[2798]: E1106 00:23:40.440368 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrpz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f86cdd547-wv29n_calico-system(cc4a9674-9eca-4968-950d-28ec9c7b89e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:40.442092 kubelet[2798]: E1106 00:23:40.441843 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:23:40.472559 kubelet[2798]: E1106 00:23:40.472430 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:23:40.481987 kubelet[2798]: E1106 00:23:40.481924 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:23:40.517666 containerd[1607]: time="2025-11-06T00:23:40.517562679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c874f48d-5fgqv,Uid:a68094c7-e135-44b6-9a5d-a63247f50c8f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f6f64b2fa073fb74ffa8d9189c5e846ba1ca85aae91af485fb6e9d7a6d03f4dd\"" Nov 6 00:23:40.521802 containerd[1607]: time="2025-11-06T00:23:40.521723043Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:40.954983 containerd[1607]: time="2025-11-06T00:23:40.954550436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:40.956625 containerd[1607]: time="2025-11-06T00:23:40.956585733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:40.957210 containerd[1607]: time="2025-11-06T00:23:40.956454557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:40.957784 kubelet[2798]: E1106 00:23:40.957708 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:40.957951 kubelet[2798]: E1106 00:23:40.957907 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:40.961288 kubelet[2798]: E1106 00:23:40.961118 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6mw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-5fgqv_calico-apiserver(a68094c7-e135-44b6-9a5d-a63247f50c8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:40.962458 kubelet[2798]: E1106 00:23:40.962399 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:23:41.171320 systemd-networkd[1473]: cali1579317c98b: Gained IPv6LL Nov 6 00:23:41.488724 kubelet[2798]: E1106 00:23:41.488650 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:23:41.489955 kubelet[2798]: E1106 00:23:41.489159 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:23:41.489955 kubelet[2798]: E1106 00:23:41.489268 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:23:41.491720 systemd-networkd[1473]: cali9f0ee6b8e75: Gained IPv6LL Nov 6 00:23:42.155439 containerd[1607]: time="2025-11-06T00:23:42.155318689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4hmv,Uid:f5803a11-2eb6-4be9-9c20-ca22054363db,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:42.323169 systemd-networkd[1473]: cali4c3e77f66b6: Gained IPv6LL Nov 6 00:23:42.328502 systemd-networkd[1473]: cali58ef0f69ce3: Link UP Nov 6 00:23:42.329519 systemd-networkd[1473]: cali58ef0f69ce3: Gained carrier Nov 6 00:23:42.362188 containerd[1607]: 2025-11-06 00:23:42.233 [INFO][4474] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:42.362188 containerd[1607]: 2025-11-06 00:23:42.247 [INFO][4474] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0 coredns-674b8bbfcf- kube-system f5803a11-2eb6-4be9-9c20-ca22054363db 806 0 2025-11-06 00:23:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 coredns-674b8bbfcf-m4hmv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali58ef0f69ce3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-" Nov 6 00:23:42.362188 containerd[1607]: 2025-11-06 00:23:42.247 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" Nov 6 00:23:42.362188 containerd[1607]: 2025-11-06 00:23:42.278 [INFO][4485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" HandleID="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Workload="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.279 [INFO][4485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" HandleID="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Workload="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-n-bff22aa786", "pod":"coredns-674b8bbfcf-m4hmv", "timestamp":"2025-11-06 00:23:42.278932342 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:42.362475 
containerd[1607]: 2025-11-06 00:23:42.279 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.279 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.279 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.290 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.296 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.300 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.302 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.362475 containerd[1607]: 2025-11-06 00:23:42.305 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.363280 containerd[1607]: 2025-11-06 00:23:42.305 [INFO][4485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.363280 containerd[1607]: 2025-11-06 00:23:42.306 [INFO][4485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f Nov 6 00:23:42.363280 containerd[1607]: 2025-11-06 00:23:42.311 [INFO][4485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.363280 containerd[1607]: 2025-11-06 00:23:42.319 [INFO][4485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.5/26] block=192.168.91.0/26 handle="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.363280 containerd[1607]: 2025-11-06 00:23:42.319 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.5/26] handle="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:42.363280 containerd[1607]: 2025-11-06 00:23:42.319 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:42.363280 containerd[1607]: 2025-11-06 00:23:42.319 [INFO][4485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.5/26] IPv6=[] ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" HandleID="k8s-pod-network.3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Workload="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" Nov 6 00:23:42.363502 containerd[1607]: 2025-11-06 00:23:42.322 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f5803a11-2eb6-4be9-9c20-ca22054363db", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"coredns-674b8bbfcf-m4hmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58ef0f69ce3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:42.363502 containerd[1607]: 2025-11-06 00:23:42.322 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.5/32] ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" Nov 6 00:23:42.363502 containerd[1607]: 2025-11-06 00:23:42.322 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58ef0f69ce3 ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" Nov 6 00:23:42.363502 containerd[1607]: 2025-11-06 00:23:42.330 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" Nov 6 00:23:42.363502 containerd[1607]: 2025-11-06 00:23:42.331 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"f5803a11-2eb6-4be9-9c20-ca22054363db", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f", Pod:"coredns-674b8bbfcf-m4hmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58ef0f69ce3", MAC:"1a:f2:10:ec:20:0a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:42.363502 containerd[1607]: 2025-11-06 00:23:42.354 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" Namespace="kube-system" Pod="coredns-674b8bbfcf-m4hmv" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--m4hmv-eth0" Nov 6 00:23:42.407320 containerd[1607]: time="2025-11-06T00:23:42.406313221Z" level=info msg="connecting to shim 3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f" address="unix:///run/containerd/s/356cf71c6e53138c1e6734be672c84f6563c545970428c93c56da3b711e51377" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:42.433207 systemd[1]: Started cri-containerd-3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f.scope - libcontainer container 3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f. 
Nov 6 00:23:42.487528 containerd[1607]: time="2025-11-06T00:23:42.487383694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-m4hmv,Uid:f5803a11-2eb6-4be9-9c20-ca22054363db,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f\"" Nov 6 00:23:42.491000 kubelet[2798]: E1106 00:23:42.490854 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:23:42.500466 containerd[1607]: time="2025-11-06T00:23:42.500434227Z" level=info msg="CreateContainer within sandbox \"3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:23:42.521316 containerd[1607]: time="2025-11-06T00:23:42.521258923Z" level=info msg="Container bbfd442a1fc8881a00340e48e235504e04f0b6e0c7da452cc200de06a53d1082: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:42.524025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618482435.mount: Deactivated successfully. Nov 6 00:23:42.531672 containerd[1607]: time="2025-11-06T00:23:42.531544248Z" level=info msg="CreateContainer within sandbox \"3b6646698e48ea415e1bfc6b4df6bf3e7c529a47091ef69c3d891efd7be2596f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bbfd442a1fc8881a00340e48e235504e04f0b6e0c7da452cc200de06a53d1082\"" Nov 6 00:23:42.532299 containerd[1607]: time="2025-11-06T00:23:42.532238470Z" level=info msg="StartContainer for \"bbfd442a1fc8881a00340e48e235504e04f0b6e0c7da452cc200de06a53d1082\"" Nov 6 00:23:42.533788 containerd[1607]: time="2025-11-06T00:23:42.533755726Z" level=info msg="connecting to shim bbfd442a1fc8881a00340e48e235504e04f0b6e0c7da452cc200de06a53d1082" address="unix:///run/containerd/s/356cf71c6e53138c1e6734be672c84f6563c545970428c93c56da3b711e51377" protocol=ttrpc version=3 Nov 6 00:23:42.556341 systemd[1]: Started cri-containerd-bbfd442a1fc8881a00340e48e235504e04f0b6e0c7da452cc200de06a53d1082.scope - libcontainer container bbfd442a1fc8881a00340e48e235504e04f0b6e0c7da452cc200de06a53d1082. 
Nov 6 00:23:42.599620 containerd[1607]: time="2025-11-06T00:23:42.599562275Z" level=info msg="StartContainer for \"bbfd442a1fc8881a00340e48e235504e04f0b6e0c7da452cc200de06a53d1082\" returns successfully" Nov 6 00:23:43.157486 containerd[1607]: time="2025-11-06T00:23:43.157340024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzk74,Uid:b5d1eeb4-2160-4f69-990a-83733d0cf15b,Namespace:kube-system,Attempt:0,}" Nov 6 00:23:43.161656 containerd[1607]: time="2025-11-06T00:23:43.161307006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j4x55,Uid:7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:43.350738 systemd-networkd[1473]: calif15f754563e: Link UP Nov 6 00:23:43.351612 systemd-networkd[1473]: calif15f754563e: Gained carrier Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.264 [INFO][4606] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.279 [INFO][4606] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0 goldmane-666569f655- calico-system 7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a 816 0 2025-11-06 00:23:17 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 goldmane-666569f655-j4x55 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif15f754563e [] [] }} ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.280 [INFO][4606] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.312 [INFO][4624] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" HandleID="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Workload="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.312 [INFO][4624] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" HandleID="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Workload="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-n-bff22aa786", "pod":"goldmane-666569f655-j4x55", "timestamp":"2025-11-06 00:23:43.312052553 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 
00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.312 [INFO][4624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.312 [INFO][4624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.312 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.320 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.324 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.328 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.329 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.331 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.331 [INFO][4624] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.332 [INFO][4624] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.336 [INFO][4624] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.340 [INFO][4624] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.6/26] block=192.168.91.0/26 handle="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.340 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.6/26] handle="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.340 [INFO][4624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:43.367017 containerd[1607]: 2025-11-06 00:23:43.341 [INFO][4624] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.6/26] IPv6=[] ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" HandleID="k8s-pod-network.29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Workload="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" Nov 6 00:23:43.367671 containerd[1607]: 2025-11-06 00:23:43.345 [INFO][4606] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"goldmane-666569f655-j4x55", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif15f754563e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:43.367671 containerd[1607]: 2025-11-06 00:23:43.345 [INFO][4606] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.6/32] ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" Nov 6 00:23:43.367671 containerd[1607]: 2025-11-06 00:23:43.345 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif15f754563e ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" Nov 6 00:23:43.367671 containerd[1607]: 2025-11-06 00:23:43.353 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" Nov 6 00:23:43.367671 containerd[1607]: 2025-11-06 00:23:43.353 [INFO][4606] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" 
Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a", Pod:"goldmane-666569f655-j4x55", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.91.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif15f754563e", MAC:"ce:ac:7d:5e:1b:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:43.367671 containerd[1607]: 2025-11-06 00:23:43.364 [INFO][4606] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" Namespace="calico-system" Pod="goldmane-666569f655-j4x55" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-goldmane--666569f655--j4x55-eth0" Nov 6 00:23:43.390324 containerd[1607]: time="2025-11-06T00:23:43.390278169Z" level=info msg="connecting to shim 29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a" address="unix:///run/containerd/s/a78ed8ebf9cb62a52bd7e6a83beaafbf0a140e6fd460ef053959d094265ffb99" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:43.421178 systemd[1]: Started cri-containerd-29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a.scope - libcontainer container 29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a. 
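In the "connecting to shim" record above, containerd dials the sandbox's shim over a Unix socket under /run/containerd/s/ and then speaks ttrpc on that connection. A minimal transport-level sketch using only the Go standard library follows; the ttrpc exchange itself is omitted, and the socket path is copied from the log line and only exists on that host.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path reported by the "connecting to shim" record.
        sock := "/run/containerd/s/a78ed8ebf9cb62a52bd7e6a83beaafbf0a140e6fd460ef053959d094265ffb99"

        // Dial the shim's Unix socket; a real client would then run ttrpc
        // over this connection instead of closing it straight away.
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("shim not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }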
Nov 6 00:23:43.471071 systemd-networkd[1473]: calib5e290bc18a: Link UP Nov 6 00:23:43.471201 systemd-networkd[1473]: calib5e290bc18a: Gained carrier Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.279 [INFO][4615] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.290 [INFO][4615] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0 coredns-674b8bbfcf- kube-system b5d1eeb4-2160-4f69-990a-83733d0cf15b 814 0 2025-11-06 00:23:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 coredns-674b8bbfcf-fzk74 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib5e290bc18a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.290 [INFO][4615] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.318 [INFO][4629] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" HandleID="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Workload="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.319 [INFO][4629] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" HandleID="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Workload="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-1-0-n-bff22aa786", "pod":"coredns-674b8bbfcf-fzk74", "timestamp":"2025-11-06 00:23:43.318868158 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.319 [INFO][4629] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.341 [INFO][4629] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.341 [INFO][4629] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.422 [INFO][4629] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.428 [INFO][4629] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.432 [INFO][4629] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.434 [INFO][4629] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.436 [INFO][4629] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.436 [INFO][4629] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.437 [INFO][4629] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7 Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.449 [INFO][4629] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.456 [INFO][4629] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.7/26] block=192.168.91.0/26 handle="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.456 [INFO][4629] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.7/26] handle="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.456 [INFO][4629] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:43.487079 containerd[1607]: 2025-11-06 00:23:43.456 [INFO][4629] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.7/26] IPv6=[] ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" HandleID="k8s-pod-network.9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Workload="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" Nov 6 00:23:43.488983 containerd[1607]: 2025-11-06 00:23:43.461 [INFO][4615] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b5d1eeb4-2160-4f69-990a-83733d0cf15b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"coredns-674b8bbfcf-fzk74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5e290bc18a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:43.488983 containerd[1607]: 2025-11-06 00:23:43.461 [INFO][4615] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.7/32] ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" Nov 6 00:23:43.488983 containerd[1607]: 2025-11-06 00:23:43.461 [INFO][4615] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib5e290bc18a ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" Nov 6 00:23:43.488983 containerd[1607]: 2025-11-06 00:23:43.470 [INFO][4615] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" Nov 6 00:23:43.488983 containerd[1607]: 2025-11-06 00:23:43.470 [INFO][4615] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b5d1eeb4-2160-4f69-990a-83733d0cf15b", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7", Pod:"coredns-674b8bbfcf-fzk74", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.91.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib5e290bc18a", MAC:"a2:b4:9e:2e:b3:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:43.488983 containerd[1607]: 2025-11-06 00:23:43.482 [INFO][4615] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fzk74" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-coredns--674b8bbfcf--fzk74-eth0" Nov 6 00:23:43.511128 kubelet[2798]: I1106 00:23:43.511074 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-m4hmv" podStartSLOduration=40.511059224 podStartE2EDuration="40.511059224s" podCreationTimestamp="2025-11-06 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:43.509075363 +0000 UTC m=+46.480909205" watchObservedRunningTime="2025-11-06 00:23:43.511059224 +0000 UTC m=+46.482893067" Nov 6 00:23:43.522801 containerd[1607]: time="2025-11-06T00:23:43.522765235Z" level=info msg="connecting to shim 9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7" 
address="unix:///run/containerd/s/04d1777906c886a21fbaff1a3366f5f645fba80dc0c63dc96352bcdbbe3c4ab2" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:43.526928 containerd[1607]: time="2025-11-06T00:23:43.526448193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-j4x55,Uid:7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a,Namespace:calico-system,Attempt:0,} returns sandbox id \"29e0457f83f0b6ce6aabd89959785f6306e450b0089cf4a412b77b298a2fb83a\"" Nov 6 00:23:43.531020 containerd[1607]: time="2025-11-06T00:23:43.531002748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:23:43.555266 systemd[1]: Started cri-containerd-9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7.scope - libcontainer container 9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7. Nov 6 00:23:43.609477 containerd[1607]: time="2025-11-06T00:23:43.609401778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fzk74,Uid:b5d1eeb4-2160-4f69-990a-83733d0cf15b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7\"" Nov 6 00:23:43.614666 containerd[1607]: time="2025-11-06T00:23:43.614640335Z" level=info msg="CreateContainer within sandbox \"9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 00:23:43.621284 containerd[1607]: time="2025-11-06T00:23:43.621242931Z" level=info msg="Container 284007572fb31387a9853fb9c0890622b127d92709c36fc31321bd38c8bc26f4: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:23:43.626242 containerd[1607]: time="2025-11-06T00:23:43.626202004Z" level=info msg="CreateContainer within sandbox \"9dca6427551628d76304948421a86b90b416fec509db92134601eb5741f56fb7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"284007572fb31387a9853fb9c0890622b127d92709c36fc31321bd38c8bc26f4\"" Nov 6 00:23:43.626668 containerd[1607]: time="2025-11-06T00:23:43.626642160Z" level=info msg="StartContainer for \"284007572fb31387a9853fb9c0890622b127d92709c36fc31321bd38c8bc26f4\"" Nov 6 00:23:43.627522 containerd[1607]: time="2025-11-06T00:23:43.627476104Z" level=info msg="connecting to shim 284007572fb31387a9853fb9c0890622b127d92709c36fc31321bd38c8bc26f4" address="unix:///run/containerd/s/04d1777906c886a21fbaff1a3366f5f645fba80dc0c63dc96352bcdbbe3c4ab2" protocol=ttrpc version=3 Nov 6 00:23:43.649155 systemd[1]: Started cri-containerd-284007572fb31387a9853fb9c0890622b127d92709c36fc31321bd38c8bc26f4.scope - libcontainer container 284007572fb31387a9853fb9c0890622b127d92709c36fc31321bd38c8bc26f4. 
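The kubelet pod_startup_latency_tracker record a few lines up reports podStartSLOduration=40.511059224s for coredns-674b8bbfcf-m4hmv: the pod was created at 00:23:03 and observed running at 00:23:43.511, and with both pull timestamps unset no image-pull time is subtracted. The arithmetic, checked with a short Go snippet (values copied from the record; this is not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created := time.Date(2025, time.November, 6, 0, 23, 3, 0, time.UTC)
        observed := time.Date(2025, time.November, 6, 0, 23, 43, 511059224, time.UTC)

        // With firstStartedPulling and lastFinishedPulling both zero, the
        // reported SLO duration is simply observed minus created.
        fmt.Println(observed.Sub(created)) // 40.511059224s
    }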
Nov 6 00:23:43.678088 containerd[1607]: time="2025-11-06T00:23:43.677961110Z" level=info msg="StartContainer for \"284007572fb31387a9853fb9c0890622b127d92709c36fc31321bd38c8bc26f4\" returns successfully" Nov 6 00:23:43.988323 containerd[1607]: time="2025-11-06T00:23:43.988085481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:43.989563 containerd[1607]: time="2025-11-06T00:23:43.989442146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:23:43.989563 containerd[1607]: time="2025-11-06T00:23:43.989529198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:43.989816 kubelet[2798]: E1106 00:23:43.989757 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:43.989908 kubelet[2798]: E1106 00:23:43.989860 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:43.991224 kubelet[2798]: E1106 00:23:43.991159 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j4x55_calico-system(7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:43.992778 kubelet[2798]: E1106 00:23:43.992325 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:23:44.115277 systemd-networkd[1473]: cali58ef0f69ce3: Gained IPv6LL Nov 6 00:23:44.156511 containerd[1607]: time="2025-11-06T00:23:44.156392558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzjlj,Uid:f296fc03-b516-4c28-a887-9cf8255c6651,Namespace:calico-system,Attempt:0,}" Nov 6 00:23:44.335718 systemd-networkd[1473]: caliae9bf86e24c: Link UP Nov 6 00:23:44.337227 systemd-networkd[1473]: caliae9bf86e24c: Gained carrier Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.225 [INFO][4801] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.245 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0 csi-node-driver- calico-system f296fc03-b516-4c28-a887-9cf8255c6651 709 0 2025-11-06 00:23:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-1-0-n-bff22aa786 csi-node-driver-dzjlj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliae9bf86e24c [] [] }} ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" 
WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.245 [INFO][4801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.287 [INFO][4812] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" HandleID="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Workload="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.287 [INFO][4812] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" HandleID="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Workload="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-1-0-n-bff22aa786", "pod":"csi-node-driver-dzjlj", "timestamp":"2025-11-06 00:23:44.287097218 +0000 UTC"}, Hostname:"ci-4459-1-0-n-bff22aa786", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.287 [INFO][4812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.288 [INFO][4812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.288 [INFO][4812] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-1-0-n-bff22aa786' Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.296 [INFO][4812] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.300 [INFO][4812] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.306 [INFO][4812] ipam/ipam.go 511: Trying affinity for 192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.308 [INFO][4812] ipam/ipam.go 158: Attempting to load block cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.313 [INFO][4812] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.91.0/26 host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.313 [INFO][4812] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.91.0/26 handle="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.315 [INFO][4812] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2 Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.320 [INFO][4812] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.91.0/26 handle="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.329 [INFO][4812] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.91.8/26] block=192.168.91.0/26 handle="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.329 [INFO][4812] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.91.8/26] handle="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" host="ci-4459-1-0-n-bff22aa786" Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.329 [INFO][4812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 00:23:44.357129 containerd[1607]: 2025-11-06 00:23:44.329 [INFO][4812] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.91.8/26] IPv6=[] ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" HandleID="k8s-pod-network.cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Workload="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" Nov 6 00:23:44.358453 containerd[1607]: 2025-11-06 00:23:44.332 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f296fc03-b516-4c28-a887-9cf8255c6651", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"", Pod:"csi-node-driver-dzjlj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae9bf86e24c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:44.358453 containerd[1607]: 2025-11-06 00:23:44.332 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.91.8/32] ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" Nov 6 00:23:44.358453 containerd[1607]: 2025-11-06 00:23:44.332 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae9bf86e24c ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" Nov 6 00:23:44.358453 containerd[1607]: 2025-11-06 00:23:44.338 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" Nov 6 00:23:44.358453 containerd[1607]: 2025-11-06 00:23:44.338 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f296fc03-b516-4c28-a887-9cf8255c6651", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-1-0-n-bff22aa786", ContainerID:"cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2", Pod:"csi-node-driver-dzjlj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.91.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliae9bf86e24c", MAC:"6a:bc:56:c3:ca:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 00:23:44.358453 containerd[1607]: 2025-11-06 00:23:44.349 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" Namespace="calico-system" Pod="csi-node-driver-dzjlj" WorkloadEndpoint="ci--4459--1--0--n--bff22aa786-k8s-csi--node--driver--dzjlj-eth0" Nov 6 00:23:44.384648 containerd[1607]: time="2025-11-06T00:23:44.384445444Z" level=info msg="connecting to shim cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2" address="unix:///run/containerd/s/78ad7017b3ff21d731548f32e797c1431de52072a57e24768714a001ae535773" namespace=k8s.io protocol=ttrpc version=3 Nov 6 00:23:44.413155 systemd[1]: Started cri-containerd-cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2.scope - libcontainer container cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2. 
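The goldmane pull failure a few records back, and the csi, node-driver-registrar, whisker, whisker-backend, kube-controllers and apiserver failures that follow, all have the same shape: ghcr.io answers 404 for the v3.30.4 tag, containerd maps that to a NotFound pull error, kubelet records ErrImagePull, and later pod syncs report ImagePullBackOff while the retry interval grows. The sketch below only illustrates that grow-and-cap retry pattern; the starting delay, doubling and cap are assumptions for the example, not kubelet's actual constants, and pullImage is a stand-in for the CRI call.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // pullImage stands in for the CRI PullImage call that keeps failing with
    // NotFound in the log; a fixed error keeps the sketch deterministic.
    func pullImage(ref string) error {
        return errors.New("failed to resolve reference \"" + ref + "\": not found")
    }

    func main() {
        ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"

        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 5; attempt++ {
            if err := pullImage(ref); err != nil {
                fmt.Printf("attempt %d: ErrImagePull (%v); next retry in %s\n", attempt, err, delay)
                // Back off: double the delay up to the cap, an ImagePullBackOff-style schedule.
                delay *= 2
                if delay > maxDelay {
                    delay = maxDelay
                }
                continue
            }
            break
        }
    }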
Nov 6 00:23:44.449740 containerd[1607]: time="2025-11-06T00:23:44.449694205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dzjlj,Uid:f296fc03-b516-4c28-a887-9cf8255c6651,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd3f2d8fa275de8ef3549faaa21c215d3dccbd6ff923e3f785f173930bed30a2\"" Nov 6 00:23:44.460222 containerd[1607]: time="2025-11-06T00:23:44.459790324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:23:44.500053 kubelet[2798]: E1106 00:23:44.499824 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:23:44.897508 containerd[1607]: time="2025-11-06T00:23:44.897415006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:44.899715 containerd[1607]: time="2025-11-06T00:23:44.899645169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:23:44.899899 containerd[1607]: time="2025-11-06T00:23:44.899756088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:23:44.900086 kubelet[2798]: E1106 00:23:44.899994 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:23:44.900811 kubelet[2798]: E1106 00:23:44.900089 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:23:44.903456 kubelet[2798]: E1106 00:23:44.903302 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:44.906197 containerd[1607]: time="2025-11-06T00:23:44.906141977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:23:45.011667 systemd-networkd[1473]: calif15f754563e: Gained IPv6LL Nov 6 00:23:45.341478 containerd[1607]: time="2025-11-06T00:23:45.341382391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:45.343724 containerd[1607]: time="2025-11-06T00:23:45.343646668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:23:45.344067 containerd[1607]: time="2025-11-06T00:23:45.343698826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:23:45.344925 kubelet[2798]: E1106 00:23:45.344529 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 
00:23:45.344925 kubelet[2798]: E1106 00:23:45.344592 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:23:45.344925 kubelet[2798]: E1106 00:23:45.344779 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:45.347282 kubelet[2798]: E1106 00:23:45.346392 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:45.395564 systemd-networkd[1473]: caliae9bf86e24c: Gained IPv6LL Nov 6 00:23:45.509349 kubelet[2798]: E1106 00:23:45.509271 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:23:45.510972 kubelet[2798]: E1106 00:23:45.510872 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:23:45.523345 systemd-networkd[1473]: calib5e290bc18a: Gained IPv6LL Nov 6 00:23:45.528676 kubelet[2798]: I1106 00:23:45.528580 2798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fzk74" podStartSLOduration=42.528555786 podStartE2EDuration="42.528555786s" podCreationTimestamp="2025-11-06 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 00:23:44.527774857 +0000 UTC m=+47.499608699" watchObservedRunningTime="2025-11-06 00:23:45.528555786 +0000 UTC m=+48.500389668" Nov 6 00:23:47.623371 kubelet[2798]: I1106 00:23:47.623165 2798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 00:23:48.852384 systemd-networkd[1473]: vxlan.calico: Link UP Nov 6 00:23:48.852394 systemd-networkd[1473]: vxlan.calico: Gained carrier Nov 6 00:23:50.131293 systemd-networkd[1473]: vxlan.calico: Gained IPv6LL Nov 6 00:23:50.157246 containerd[1607]: time="2025-11-06T00:23:50.156593705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:23:50.586924 containerd[1607]: time="2025-11-06T00:23:50.586804811Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:50.588871 containerd[1607]: time="2025-11-06T00:23:50.588781632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:23:50.589004 containerd[1607]: time="2025-11-06T00:23:50.588895166Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:23:50.589343 kubelet[2798]: E1106 00:23:50.589218 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:50.591604 kubelet[2798]: E1106 00:23:50.589414 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:23:50.591604 kubelet[2798]: E1106 00:23:50.589729 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf747a2b409420980f64dd3ca00a319,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:50.594345 containerd[1607]: time="2025-11-06T00:23:50.594270432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:23:51.033955 containerd[1607]: time="2025-11-06T00:23:51.033869046Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:51.035541 containerd[1607]: time="2025-11-06T00:23:51.035475653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:23:51.035732 containerd[1607]: time="2025-11-06T00:23:51.035583254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:51.035891 kubelet[2798]: E1106 00:23:51.035810 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:51.035891 kubelet[2798]: E1106 00:23:51.035874 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:23:51.036274 kubelet[2798]: E1106 00:23:51.036143 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Nov 6 00:23:51.037899 kubelet[2798]: E1106 00:23:51.037457 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:23:53.160523 containerd[1607]: time="2025-11-06T00:23:53.159681983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:23:53.600151 containerd[1607]: time="2025-11-06T00:23:53.600025030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:53.602331 containerd[1607]: time="2025-11-06T00:23:53.602230832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:23:53.602536 containerd[1607]: time="2025-11-06T00:23:53.602382377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:23:53.603173 kubelet[2798]: E1106 00:23:53.602566 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:53.603173 kubelet[2798]: E1106 00:23:53.602637 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:23:53.603173 kubelet[2798]: E1106 00:23:53.602947 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrpz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f86cdd547-wv29n_calico-system(cc4a9674-9eca-4968-950d-28ec9c7b89e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:53.605006 containerd[1607]: time="2025-11-06T00:23:53.604626992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:53.634738 kubelet[2798]: E1106 00:23:53.634658 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" 
podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:23:54.044835 containerd[1607]: time="2025-11-06T00:23:54.044764671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:54.046971 containerd[1607]: time="2025-11-06T00:23:54.046879752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:54.047594 containerd[1607]: time="2025-11-06T00:23:54.047098263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:54.048234 kubelet[2798]: E1106 00:23:54.047749 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:54.048234 kubelet[2798]: E1106 00:23:54.047850 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:54.048234 kubelet[2798]: E1106 00:23:54.048128 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ttfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-mrqff_calico-apiserver(e926a82a-b4ef-430a-95dd-9253d2a0007c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:54.049823 kubelet[2798]: E1106 00:23:54.049764 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:23:55.157661 containerd[1607]: time="2025-11-06T00:23:55.157251325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:23:55.597139 containerd[1607]: time="2025-11-06T00:23:55.597001454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:55.598927 containerd[1607]: time="2025-11-06T00:23:55.598805763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:23:55.599088 containerd[1607]: time="2025-11-06T00:23:55.598965923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:55.599324 kubelet[2798]: E1106 00:23:55.599268 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:55.599879 kubelet[2798]: E1106 00:23:55.599439 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:23:55.599879 kubelet[2798]: E1106 00:23:55.599693 2798 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6mw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-5fgqv_calico-apiserver(a68094c7-e135-44b6-9a5d-a63247f50c8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:55.601405 kubelet[2798]: E1106 00:23:55.601296 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:23:57.235983 containerd[1607]: time="2025-11-06T00:23:57.235930952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:23:57.691761 containerd[1607]: time="2025-11-06T00:23:57.691656817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:23:57.693866 containerd[1607]: time="2025-11-06T00:23:57.693816162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:23:57.693970 containerd[1607]: time="2025-11-06T00:23:57.693919335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:23:57.694596 kubelet[2798]: E1106 00:23:57.694131 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:57.694596 kubelet[2798]: E1106 00:23:57.694210 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:23:57.694596 kubelet[2798]: E1106 00:23:57.694449 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j4x55_calico-system(7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:23:57.696498 kubelet[2798]: E1106 00:23:57.696417 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:24:00.156505 containerd[1607]: time="2025-11-06T00:24:00.156431597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:24:00.590840 containerd[1607]: time="2025-11-06T00:24:00.590787869Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:00.592633 containerd[1607]: time="2025-11-06T00:24:00.592534699Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:24:00.594410 containerd[1607]: time="2025-11-06T00:24:00.592635708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:24:00.594492 kubelet[2798]: E1106 00:24:00.592838 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:00.594492 kubelet[2798]: E1106 00:24:00.592903 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:00.594492 kubelet[2798]: E1106 00:24:00.593103 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:00.597004 containerd[1607]: time="2025-11-06T00:24:00.596898632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:24:01.046586 containerd[1607]: time="2025-11-06T00:24:01.046498610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:01.048176 containerd[1607]: time="2025-11-06T00:24:01.048120055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:24:01.048909 containerd[1607]: time="2025-11-06T00:24:01.048222337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:24:01.048978 kubelet[2798]: E1106 00:24:01.048468 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:01.048978 kubelet[2798]: E1106 00:24:01.048537 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:01.048978 kubelet[2798]: E1106 00:24:01.048735 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:01.050890 kubelet[2798]: E1106 00:24:01.050810 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:24:01.161322 kubelet[2798]: E1106 00:24:01.161232 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:24:04.596782 containerd[1607]: time="2025-11-06T00:24:04.596709034Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"80ab10563fb41b831a38352df26423c24833b19fed5400a9bc8bd1e03124c688\" pid:5113 exited_at:{seconds:1762388644 nanos:596185371}" Nov 6 00:24:06.156202 kubelet[2798]: E1106 00:24:06.155920 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:24:07.159367 kubelet[2798]: E1106 00:24:07.158196 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:24:08.156386 kubelet[2798]: E1106 00:24:08.156284 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" 
podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:24:12.156523 kubelet[2798]: E1106 00:24:12.156462 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:24:13.161837 kubelet[2798]: E1106 00:24:13.161745 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:24:14.158132 containerd[1607]: time="2025-11-06T00:24:14.157982305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:24:14.604951 containerd[1607]: time="2025-11-06T00:24:14.604788124Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:14.607148 containerd[1607]: time="2025-11-06T00:24:14.606345127Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:24:14.607245 containerd[1607]: time="2025-11-06T00:24:14.607227663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:24:14.607437 kubelet[2798]: E1106 00:24:14.607403 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:24:14.608297 kubelet[2798]: E1106 00:24:14.607725 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:24:14.608297 kubelet[2798]: E1106 00:24:14.608222 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf747a2b409420980f64dd3ca00a319,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:14.611302 containerd[1607]: time="2025-11-06T00:24:14.611244442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:24:15.042078 containerd[1607]: time="2025-11-06T00:24:15.042006127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:15.043201 containerd[1607]: time="2025-11-06T00:24:15.043168958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:24:15.043334 containerd[1607]: time="2025-11-06T00:24:15.043273053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:24:15.043937 kubelet[2798]: E1106 00:24:15.043498 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:24:15.043937 kubelet[2798]: E1106 00:24:15.043547 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:24:15.043937 kubelet[2798]: E1106 00:24:15.043650 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:15.045142 kubelet[2798]: E1106 00:24:15.045114 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:24:18.157570 containerd[1607]: time="2025-11-06T00:24:18.157229858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:24:18.610927 containerd[1607]: time="2025-11-06T00:24:18.610842508Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 
00:24:18.612269 containerd[1607]: time="2025-11-06T00:24:18.612202599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:24:18.612269 containerd[1607]: time="2025-11-06T00:24:18.612293299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:24:18.613158 kubelet[2798]: E1106 00:24:18.612596 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:18.613158 kubelet[2798]: E1106 00:24:18.612675 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:18.613158 kubelet[2798]: E1106 00:24:18.612892 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrpz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f86cdd547-wv29n_calico-system(cc4a9674-9eca-4968-950d-28ec9c7b89e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:18.614283 kubelet[2798]: E1106 00:24:18.614208 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:24:21.157049 containerd[1607]: time="2025-11-06T00:24:21.155858115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:24:21.588699 containerd[1607]: time="2025-11-06T00:24:21.588609457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:21.590490 containerd[1607]: time="2025-11-06T00:24:21.590343571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:24:21.590490 containerd[1607]: time="2025-11-06T00:24:21.590462283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:21.590685 kubelet[2798]: E1106 00:24:21.590618 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:21.590685 kubelet[2798]: E1106 00:24:21.590680 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:21.591077 kubelet[2798]: E1106 
00:24:21.590903 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ttfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-mrqff_calico-apiserver(e926a82a-b4ef-430a-95dd-9253d2a0007c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:21.591731 containerd[1607]: time="2025-11-06T00:24:21.591718311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:24:21.592225 kubelet[2798]: E1106 00:24:21.592174 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:24:22.022797 containerd[1607]: time="2025-11-06T00:24:22.022256709Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:22.024294 containerd[1607]: time="2025-11-06T00:24:22.024127440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:24:22.024568 containerd[1607]: time="2025-11-06T00:24:22.024209293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:22.024990 kubelet[2798]: E1106 00:24:22.024929 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:22.025208 kubelet[2798]: E1106 00:24:22.025099 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:24:22.025542 kubelet[2798]: E1106 00:24:22.025470 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6mw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-5fgqv_calico-apiserver(a68094c7-e135-44b6-9a5d-a63247f50c8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:22.026820 kubelet[2798]: E1106 00:24:22.026744 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:24:25.157902 containerd[1607]: time="2025-11-06T00:24:25.157854371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:24:25.602129 containerd[1607]: time="2025-11-06T00:24:25.602075365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:25.603565 containerd[1607]: time="2025-11-06T00:24:25.603524594Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:24:25.603635 containerd[1607]: time="2025-11-06T00:24:25.603596970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:24:25.603805 kubelet[2798]: E1106 00:24:25.603762 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:25.604177 kubelet[2798]: E1106 00:24:25.603810 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:24:25.604177 kubelet[2798]: E1106 00:24:25.603955 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:25.626015 containerd[1607]: time="2025-11-06T00:24:25.625965267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:24:26.060543 containerd[1607]: time="2025-11-06T00:24:26.060505952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:26.062393 containerd[1607]: time="2025-11-06T00:24:26.062284009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:24:26.062393 containerd[1607]: time="2025-11-06T00:24:26.062360713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:24:26.062530 kubelet[2798]: E1106 00:24:26.062490 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:26.062588 kubelet[2798]: E1106 00:24:26.062545 2798 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:24:26.062729 kubelet[2798]: E1106 00:24:26.062675 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:26.064283 kubelet[2798]: E1106 00:24:26.064233 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:24:26.158532 containerd[1607]: time="2025-11-06T00:24:26.158211163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:24:26.752800 containerd[1607]: time="2025-11-06T00:24:26.752724091Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:26.754462 containerd[1607]: time="2025-11-06T00:24:26.754332469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:24:26.754638 containerd[1607]: time="2025-11-06T00:24:26.754408963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:24:26.754839 kubelet[2798]: E1106 00:24:26.754708 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:24:26.755670 kubelet[2798]: E1106 00:24:26.754773 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:24:26.755670 kubelet[2798]: E1106 00:24:26.755336 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j4x55_calico-system(7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:26.756798 kubelet[2798]: E1106 00:24:26.756724 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:24:29.156524 kubelet[2798]: E1106 00:24:29.156472 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:24:30.157675 kubelet[2798]: E1106 00:24:30.157610 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:24:34.541177 containerd[1607]: time="2025-11-06T00:24:34.541131542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"df4aeb3655268be39948baba9206c213b80f5d1f023eb8b39a448db4d42150d3\" pid:5156 exited_at:{seconds:1762388674 nanos:540658384}" Nov 6 00:24:35.156117 kubelet[2798]: E1106 00:24:35.155677 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:24:36.157734 kubelet[2798]: E1106 00:24:36.157667 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:24:41.162596 kubelet[2798]: E1106 00:24:41.161524 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:24:41.166137 kubelet[2798]: E1106 00:24:41.164028 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:24:41.166137 kubelet[2798]: E1106 00:24:41.163108 2798 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:24:45.158600 kubelet[2798]: E1106 00:24:45.158554 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:24:48.156291 kubelet[2798]: E1106 00:24:48.156258 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:24:50.157663 kubelet[2798]: E1106 00:24:50.157612 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:24:52.156845 kubelet[2798]: E1106 00:24:52.156378 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:24:53.162054 kubelet[2798]: E1106 00:24:53.161975 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:24:54.158315 kubelet[2798]: E1106 00:24:54.157596 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:24:59.157706 containerd[1607]: time="2025-11-06T00:24:59.157637417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:24:59.590055 containerd[1607]: time="2025-11-06T00:24:59.589963913Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:24:59.591475 containerd[1607]: time="2025-11-06T00:24:59.591419785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:24:59.592395 containerd[1607]: time="2025-11-06T00:24:59.591515094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:24:59.593047 kubelet[2798]: E1106 00:24:59.591792 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:59.593047 kubelet[2798]: E1106 00:24:59.591854 2798 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:24:59.601414 kubelet[2798]: E1106 00:24:59.592309 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrpz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f86cdd547-wv29n_calico-system(cc4a9674-9eca-4968-950d-28ec9c7b89e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:24:59.603399 kubelet[2798]: E1106 00:24:59.603369 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:25:01.155602 kubelet[2798]: E1106 00:25:01.155152 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:25:03.161393 containerd[1607]: time="2025-11-06T00:25:03.161311638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:03.594727 containerd[1607]: time="2025-11-06T00:25:03.594649229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:03.597366 containerd[1607]: time="2025-11-06T00:25:03.597168264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:03.597672 containerd[1607]: time="2025-11-06T00:25:03.597325630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:03.598476 kubelet[2798]: E1106 00:25:03.598178 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:03.598476 kubelet[2798]: E1106 00:25:03.598241 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:03.599525 kubelet[2798]: E1106 00:25:03.599134 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6mw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-5fgqv_calico-apiserver(a68094c7-e135-44b6-9a5d-a63247f50c8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:03.601134 kubelet[2798]: E1106 00:25:03.601061 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:25:04.640822 containerd[1607]: time="2025-11-06T00:25:04.640782193Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"62cf05b008b3dc2507cd3131a7698e73f3065fb3bb3db201c1be42900ef6e88a\" pid:5185 exited_at:{seconds:1762388704 nanos:640341237}" Nov 6 00:25:06.155333 kubelet[2798]: E1106 00:25:06.155279 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:25:07.160605 containerd[1607]: time="2025-11-06T00:25:07.160506875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:25:07.607081 containerd[1607]: time="2025-11-06T00:25:07.606249938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:07.609864 containerd[1607]: time="2025-11-06T00:25:07.609754332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:25:07.610267 containerd[1607]: time="2025-11-06T00:25:07.609820457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:25:07.610820 kubelet[2798]: E1106 00:25:07.610603 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:07.613207 kubelet[2798]: E1106 00:25:07.610782 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:25:07.613207 kubelet[2798]: E1106 00:25:07.611730 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:07.615526 containerd[1607]: time="2025-11-06T00:25:07.615354779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:25:08.049662 containerd[1607]: time="2025-11-06T00:25:08.049604429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:08.051311 containerd[1607]: time="2025-11-06T00:25:08.051260627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:25:08.051425 containerd[1607]: time="2025-11-06T00:25:08.051345836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:25:08.051798 kubelet[2798]: E1106 00:25:08.051710 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:08.051798 kubelet[2798]: E1106 00:25:08.051777 2798 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:25:08.052109 kubelet[2798]: E1106 00:25:08.052064 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:08.053816 kubelet[2798]: E1106 00:25:08.053764 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:25:08.157505 containerd[1607]: time="2025-11-06T00:25:08.157435957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:25:08.601698 containerd[1607]: time="2025-11-06T00:25:08.601594394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:08.603484 containerd[1607]: time="2025-11-06T00:25:08.603361289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:25:08.603579 containerd[1607]: time="2025-11-06T00:25:08.603522882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:25:08.603849 kubelet[2798]: E1106 00:25:08.603769 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:08.604556 kubelet[2798]: E1106 00:25:08.603864 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:25:08.604556 kubelet[2798]: E1106 00:25:08.604172 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf747a2b409420980f64dd3ca00a319,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:08.608644 containerd[1607]: time="2025-11-06T00:25:08.608555594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:25:09.058421 containerd[1607]: time="2025-11-06T00:25:09.058335627Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:09.060137 containerd[1607]: time="2025-11-06T00:25:09.060075823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:25:09.060261 containerd[1607]: time="2025-11-06T00:25:09.060196519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:25:09.060585 kubelet[2798]: E1106 00:25:09.060496 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:09.061011 kubelet[2798]: E1106 00:25:09.060593 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:25:09.061011 kubelet[2798]: E1106 00:25:09.060870 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:09.062277 kubelet[2798]: E1106 00:25:09.062108 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:25:13.160937 containerd[1607]: time="2025-11-06T00:25:13.160314019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:25:13.579280 containerd[1607]: time="2025-11-06T00:25:13.579026407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:13.580859 containerd[1607]: time="2025-11-06T00:25:13.580757134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:25:13.580859 containerd[1607]: time="2025-11-06T00:25:13.580859516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:13.582248 kubelet[2798]: E1106 00:25:13.581069 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:13.582248 kubelet[2798]: E1106 00:25:13.581135 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:25:13.582248 kubelet[2798]: E1106 00:25:13.581405 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ttfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-mrqff_calico-apiserver(e926a82a-b4ef-430a-95dd-9253d2a0007c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:13.582874 kubelet[2798]: E1106 00:25:13.582671 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:25:14.157742 kubelet[2798]: E1106 00:25:14.157701 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:25:18.158800 kubelet[2798]: E1106 00:25:18.157523 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:25:20.156802 containerd[1607]: time="2025-11-06T00:25:20.156679749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:25:20.634544 containerd[1607]: time="2025-11-06T00:25:20.634314813Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:25:20.635783 containerd[1607]: time="2025-11-06T00:25:20.635759472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:25:20.635953 containerd[1607]: time="2025-11-06T00:25:20.635851064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:25:20.636275 kubelet[2798]: E1106 00:25:20.636210 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:20.636523 kubelet[2798]: E1106 00:25:20.636295 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:25:20.637051 kubelet[2798]: E1106 00:25:20.636511 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j4x55_calico-system(7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:25:20.637889 kubelet[2798]: E1106 00:25:20.637839 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:25:23.161919 kubelet[2798]: E1106 00:25:23.161652 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:25:23.164846 kubelet[2798]: E1106 00:25:23.162825 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:25:25.161660 kubelet[2798]: E1106 00:25:25.161606 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:25:25.174122 kubelet[2798]: E1106 00:25:25.160985 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:25:32.155916 kubelet[2798]: E1106 00:25:32.155822 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:25:33.157782 kubelet[2798]: E1106 00:25:33.157673 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:25:34.571881 containerd[1607]: time="2025-11-06T00:25:34.571832945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"4eff47e7f9fe11f9cb22fbbf45d98f62a7660017e2b29ddcbe4728af7b953306\" pid:5242 exited_at:{seconds:1762388734 nanos:571247497}" Nov 6 00:25:35.970833 systemd[1]: Started sshd@7-135.181.151.25:22-139.178.68.195:54864.service - OpenSSH per-connection server daemon (139.178.68.195:54864). Nov 6 00:25:37.060059 sshd[5260]: Accepted publickey for core from 139.178.68.195 port 54864 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:25:37.063221 sshd-session[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:37.070093 systemd-logind[1577]: New session 8 of user core. Nov 6 00:25:37.076343 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 6 00:25:37.168460 kubelet[2798]: E1106 00:25:37.167956 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:25:38.158051 kubelet[2798]: E1106 00:25:38.156819 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:25:38.158051 kubelet[2798]: E1106 00:25:38.157673 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:25:38.511108 sshd[5263]: Connection closed by 139.178.68.195 port 54864 Nov 6 00:25:38.509424 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:38.516015 systemd[1]: sshd@7-135.181.151.25:22-139.178.68.195:54864.service: Deactivated successfully. Nov 6 00:25:38.520498 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 00:25:38.523310 systemd-logind[1577]: Session 8 logged out. Waiting for processes to exit. Nov 6 00:25:38.526392 systemd-logind[1577]: Removed session 8. 
Nov 6 00:25:40.157464 kubelet[2798]: E1106 00:25:40.157361 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:25:43.157558 kubelet[2798]: E1106 00:25:43.156888 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:25:43.723347 systemd[1]: Started sshd@8-135.181.151.25:22-139.178.68.195:47642.service - OpenSSH per-connection server daemon (139.178.68.195:47642). Nov 6 00:25:44.876876 sshd[5280]: Accepted publickey for core from 139.178.68.195 port 47642 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:25:44.878458 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:44.885932 systemd-logind[1577]: New session 9 of user core. Nov 6 00:25:44.894428 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 00:25:45.832175 sshd[5283]: Connection closed by 139.178.68.195 port 47642 Nov 6 00:25:45.833540 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:45.838889 systemd-logind[1577]: Session 9 logged out. Waiting for processes to exit. Nov 6 00:25:45.839552 systemd[1]: sshd@8-135.181.151.25:22-139.178.68.195:47642.service: Deactivated successfully. Nov 6 00:25:45.844697 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 00:25:45.849534 systemd-logind[1577]: Removed session 9. Nov 6 00:25:46.155588 kubelet[2798]: E1106 00:25:46.155481 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:25:50.987313 systemd[1]: Started sshd@9-135.181.151.25:22-139.178.68.195:47656.service - OpenSSH per-connection server daemon (139.178.68.195:47656). 
Nov 6 00:25:51.164443 kubelet[2798]: E1106 00:25:51.164377 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:25:52.006513 sshd[5296]: Accepted publickey for core from 139.178.68.195 port 47656 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:25:52.010988 sshd-session[5296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:52.020710 systemd-logind[1577]: New session 10 of user core. Nov 6 00:25:52.029316 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 00:25:52.156834 kubelet[2798]: E1106 00:25:52.156237 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:25:52.799446 sshd[5299]: Connection closed by 139.178.68.195 port 47656 Nov 6 00:25:52.800628 sshd-session[5296]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:52.806497 systemd-logind[1577]: Session 10 logged out. Waiting for processes to exit. Nov 6 00:25:52.807386 systemd[1]: sshd@9-135.181.151.25:22-139.178.68.195:47656.service: Deactivated successfully. Nov 6 00:25:52.810354 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 00:25:52.814446 systemd-logind[1577]: Removed session 10. Nov 6 00:25:52.982773 systemd[1]: Started sshd@10-135.181.151.25:22-139.178.68.195:47670.service - OpenSSH per-connection server daemon (139.178.68.195:47670). 
Nov 6 00:25:53.158085 kubelet[2798]: E1106 00:25:53.157309 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:25:53.161158 kubelet[2798]: E1106 00:25:53.160854 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:25:54.017708 sshd[5312]: Accepted publickey for core from 139.178.68.195 port 47670 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:25:54.019492 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:54.027175 systemd-logind[1577]: New session 11 of user core. Nov 6 00:25:54.034922 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 00:25:54.837199 sshd[5315]: Connection closed by 139.178.68.195 port 47670 Nov 6 00:25:54.840302 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:54.845606 systemd[1]: sshd@10-135.181.151.25:22-139.178.68.195:47670.service: Deactivated successfully. Nov 6 00:25:54.848669 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 00:25:54.850135 systemd-logind[1577]: Session 11 logged out. Waiting for processes to exit. Nov 6 00:25:54.853930 systemd-logind[1577]: Removed session 11. Nov 6 00:25:55.022416 systemd[1]: Started sshd@11-135.181.151.25:22-139.178.68.195:36022.service - OpenSSH per-connection server daemon (139.178.68.195:36022). Nov 6 00:25:56.069378 sshd[5325]: Accepted publickey for core from 139.178.68.195 port 36022 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:25:56.071430 sshd-session[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:25:56.076326 systemd-logind[1577]: New session 12 of user core. Nov 6 00:25:56.081134 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 6 00:25:56.156118 kubelet[2798]: E1106 00:25:56.155992 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:25:56.886169 sshd[5334]: Connection closed by 139.178.68.195 port 36022 Nov 6 00:25:56.887861 sshd-session[5325]: pam_unix(sshd:session): session closed for user core Nov 6 00:25:56.893188 systemd-logind[1577]: Session 12 logged out. Waiting for processes to exit. Nov 6 00:25:56.895390 systemd[1]: sshd@11-135.181.151.25:22-139.178.68.195:36022.service: Deactivated successfully. Nov 6 00:25:56.900807 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 00:25:56.906615 systemd-logind[1577]: Removed session 12. Nov 6 00:26:00.156407 kubelet[2798]: E1106 00:26:00.156258 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:26:02.065166 systemd[1]: Started sshd@12-135.181.151.25:22-139.178.68.195:36034.service - OpenSSH per-connection server daemon (139.178.68.195:36034). Nov 6 00:26:03.092997 sshd[5348]: Accepted publickey for core from 139.178.68.195 port 36034 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:03.095506 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:03.104809 systemd-logind[1577]: New session 13 of user core. Nov 6 00:26:03.114680 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 00:26:03.943953 sshd[5351]: Connection closed by 139.178.68.195 port 36034 Nov 6 00:26:03.945677 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:03.959830 systemd[1]: sshd@12-135.181.151.25:22-139.178.68.195:36034.service: Deactivated successfully. Nov 6 00:26:03.963655 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 00:26:03.968125 systemd-logind[1577]: Session 13 logged out. Waiting for processes to exit. Nov 6 00:26:03.970191 systemd-logind[1577]: Removed session 13. 
Nov 6 00:26:04.155629 kubelet[2798]: E1106 00:26:04.155560 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:26:04.629432 containerd[1607]: time="2025-11-06T00:26:04.629010743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"02e3d39bbcffaf09925a46cdfe30e1f966e2db42141011b26ae46859ae4a7608\" pid:5374 exited_at:{seconds:1762388764 nanos:628687226}" Nov 6 00:26:05.157453 kubelet[2798]: E1106 00:26:05.157305 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:26:06.156235 kubelet[2798]: E1106 00:26:06.156113 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:26:08.156161 kubelet[2798]: E1106 00:26:08.155937 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:26:08.157813 kubelet[2798]: E1106 00:26:08.157747 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:26:09.124147 systemd[1]: Started sshd@13-135.181.151.25:22-139.178.68.195:36156.service - OpenSSH per-connection server daemon (139.178.68.195:36156). Nov 6 00:26:10.170959 sshd[5388]: Accepted publickey for core from 139.178.68.195 port 36156 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:10.170878 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:10.179221 systemd-logind[1577]: New session 14 of user core. Nov 6 00:26:10.186196 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 00:26:10.995628 sshd[5391]: Connection closed by 139.178.68.195 port 36156 Nov 6 00:26:10.997291 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:11.001559 systemd-logind[1577]: Session 14 logged out. Waiting for processes to exit. Nov 6 00:26:11.002104 systemd[1]: sshd@13-135.181.151.25:22-139.178.68.195:36156.service: Deactivated successfully. Nov 6 00:26:11.006127 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 00:26:11.009246 systemd-logind[1577]: Removed session 14. Nov 6 00:26:11.156116 kubelet[2798]: E1106 00:26:11.155922 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:26:16.176429 systemd[1]: Started sshd@14-135.181.151.25:22-139.178.68.195:51954.service - OpenSSH per-connection server daemon (139.178.68.195:51954). Nov 6 00:26:17.200175 sshd[5403]: Accepted publickey for core from 139.178.68.195 port 51954 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:17.202528 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:17.217238 systemd-logind[1577]: New session 15 of user core. Nov 6 00:26:17.224316 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 00:26:18.042954 sshd[5406]: Connection closed by 139.178.68.195 port 51954 Nov 6 00:26:18.047155 sshd-session[5403]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:18.053060 systemd-logind[1577]: Session 15 logged out. Waiting for processes to exit. 
Nov 6 00:26:18.054756 systemd[1]: sshd@14-135.181.151.25:22-139.178.68.195:51954.service: Deactivated successfully. Nov 6 00:26:18.058681 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 00:26:18.060825 systemd-logind[1577]: Removed session 15. Nov 6 00:26:18.218509 systemd[1]: Started sshd@15-135.181.151.25:22-139.178.68.195:51962.service - OpenSSH per-connection server daemon (139.178.68.195:51962). Nov 6 00:26:19.158345 kubelet[2798]: E1106 00:26:19.158205 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:26:19.158345 kubelet[2798]: E1106 00:26:19.158358 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:26:19.241481 sshd[5418]: Accepted publickey for core from 139.178.68.195 port 51962 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:19.243513 sshd-session[5418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:19.252532 systemd-logind[1577]: New session 16 of user core. Nov 6 00:26:19.261280 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 6 00:26:20.155372 kubelet[2798]: E1106 00:26:20.155334 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:26:20.156506 kubelet[2798]: E1106 00:26:20.156478 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:26:20.258015 sshd[5421]: Connection closed by 139.178.68.195 port 51962 Nov 6 00:26:20.258897 sshd-session[5418]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:20.262080 systemd-logind[1577]: Session 16 logged out. Waiting for processes to exit. Nov 6 00:26:20.262631 systemd[1]: sshd@15-135.181.151.25:22-139.178.68.195:51962.service: Deactivated successfully. Nov 6 00:26:20.265630 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 00:26:20.268936 systemd-logind[1577]: Removed session 16. Nov 6 00:26:20.473129 systemd[1]: Started sshd@16-135.181.151.25:22-139.178.68.195:51978.service - OpenSSH per-connection server daemon (139.178.68.195:51978). Nov 6 00:26:21.157817 kubelet[2798]: E1106 00:26:21.157447 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:26:21.624490 sshd[5431]: Accepted publickey for core from 139.178.68.195 port 51978 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:21.627110 sshd-session[5431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:21.638185 systemd-logind[1577]: New session 17 of user core. Nov 6 00:26:21.643192 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 6 00:26:22.158288 kubelet[2798]: E1106 00:26:22.158207 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:26:23.181756 sshd[5435]: Connection closed by 139.178.68.195 port 51978 Nov 6 00:26:23.184119 sshd-session[5431]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:23.187561 systemd[1]: sshd@16-135.181.151.25:22-139.178.68.195:51978.service: Deactivated successfully. Nov 6 00:26:23.187997 systemd-logind[1577]: Session 17 logged out. Waiting for processes to exit. Nov 6 00:26:23.190464 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 00:26:23.193123 systemd-logind[1577]: Removed session 17. Nov 6 00:26:23.340316 systemd[1]: Started sshd@17-135.181.151.25:22-139.178.68.195:34274.service - OpenSSH per-connection server daemon (139.178.68.195:34274). Nov 6 00:26:24.371910 sshd[5453]: Accepted publickey for core from 139.178.68.195 port 34274 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:24.372515 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:24.378586 systemd-logind[1577]: New session 18 of user core. Nov 6 00:26:24.385896 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 00:26:25.415774 sshd[5456]: Connection closed by 139.178.68.195 port 34274 Nov 6 00:26:25.419520 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:25.433531 systemd-logind[1577]: Session 18 logged out. Waiting for processes to exit. Nov 6 00:26:25.433624 systemd[1]: sshd@17-135.181.151.25:22-139.178.68.195:34274.service: Deactivated successfully. Nov 6 00:26:25.437957 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 00:26:25.441911 systemd-logind[1577]: Removed session 18. Nov 6 00:26:25.630709 systemd[1]: Started sshd@18-135.181.151.25:22-139.178.68.195:34290.service - OpenSSH per-connection server daemon (139.178.68.195:34290). Nov 6 00:26:26.806091 sshd[5466]: Accepted publickey for core from 139.178.68.195 port 34290 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:26.810002 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:26.817422 systemd-logind[1577]: New session 19 of user core. Nov 6 00:26:26.826386 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 00:26:27.741523 sshd[5469]: Connection closed by 139.178.68.195 port 34290 Nov 6 00:26:27.740510 sshd-session[5466]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:27.745730 systemd[1]: sshd@18-135.181.151.25:22-139.178.68.195:34290.service: Deactivated successfully. Nov 6 00:26:27.747840 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 00:26:27.748727 systemd-logind[1577]: Session 19 logged out. Waiting for processes to exit. Nov 6 00:26:27.750798 systemd-logind[1577]: Removed session 19. 
Nov 6 00:26:32.156569 containerd[1607]: time="2025-11-06T00:26:32.156499879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 00:26:32.625902 containerd[1607]: time="2025-11-06T00:26:32.625831117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:32.628306 containerd[1607]: time="2025-11-06T00:26:32.628206717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 00:26:32.628306 containerd[1607]: time="2025-11-06T00:26:32.628257613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 6 00:26:32.628727 kubelet[2798]: E1106 00:26:32.628653 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:26:32.629352 kubelet[2798]: E1106 00:26:32.628706 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 00:26:32.629352 kubelet[2798]: E1106 00:26:32.629279 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9cf747a2b409420980f64dd3ca00a319,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:32.631547 containerd[1607]: time="2025-11-06T00:26:32.631487386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 00:26:32.895145 systemd[1]: Started sshd@19-135.181.151.25:22-139.178.68.195:34294.service - OpenSSH per-connection server daemon (139.178.68.195:34294). Nov 6 00:26:33.057345 containerd[1607]: time="2025-11-06T00:26:33.057273285Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:33.058926 containerd[1607]: time="2025-11-06T00:26:33.058842862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 00:26:33.058983 containerd[1607]: time="2025-11-06T00:26:33.058961044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 6 00:26:33.059345 kubelet[2798]: E1106 00:26:33.059288 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:26:33.059401 kubelet[2798]: E1106 00:26:33.059358 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 00:26:33.059856 kubelet[2798]: E1106 00:26:33.059669 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kgx74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b54cc9969-225cb_calico-system(9cab2cb0-7aac-4257-baff-c860234a94ee): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:33.061044 kubelet[2798]: E1106 00:26:33.060978 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:26:33.163280 containerd[1607]: time="2025-11-06T00:26:33.162265105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 00:26:33.173305 kubelet[2798]: E1106 00:26:33.173269 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:26:33.584223 containerd[1607]: time="2025-11-06T00:26:33.583713284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:33.586609 containerd[1607]: time="2025-11-06T00:26:33.586492432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 00:26:33.586609 containerd[1607]: time="2025-11-06T00:26:33.586577041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 6 00:26:33.587027 kubelet[2798]: E1106 00:26:33.586974 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:26:33.587317 kubelet[2798]: E1106 00:26:33.587253 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 00:26:33.587989 kubelet[2798]: E1106 00:26:33.587902 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mrpz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f86cdd547-wv29n_calico-system(cc4a9674-9eca-4968-950d-28ec9c7b89e9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:33.591489 kubelet[2798]: E1106 00:26:33.590368 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:26:33.916719 sshd[5491]: Accepted publickey for core from 139.178.68.195 port 34294 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:33.918540 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:33.924869 systemd-logind[1577]: New session 20 of user core. Nov 6 00:26:33.930156 systemd[1]: Started session-20.scope - Session 20 of User core. 
Nov 6 00:26:34.157620 containerd[1607]: time="2025-11-06T00:26:34.157548405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 00:26:34.587477 containerd[1607]: time="2025-11-06T00:26:34.587323298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:34.589589 containerd[1607]: time="2025-11-06T00:26:34.589490046Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 00:26:34.589589 containerd[1607]: time="2025-11-06T00:26:34.589561380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 6 00:26:34.589848 kubelet[2798]: E1106 00:26:34.589801 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:26:34.590044 kubelet[2798]: E1106 00:26:34.589856 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 00:26:34.590044 kubelet[2798]: E1106 00:26:34.589962 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:34.592325 containerd[1607]: time="2025-11-06T00:26:34.592267530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 00:26:34.607531 containerd[1607]: time="2025-11-06T00:26:34.607480008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c91c85d8351290f96332d9e7b27b268839591215f42ec35bccf77161928ab3bb\" id:\"0213d606b9924dfd99b21671d2c3c31c0a1c5e1daee2dbe55cb53d91868eb083\" pid:5513 exited_at:{seconds:1762388794 nanos:606956976}" Nov 6 00:26:34.709233 sshd[5494]: Connection closed by 139.178.68.195 port 34294 Nov 6 00:26:34.711095 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:34.717264 systemd[1]: sshd@19-135.181.151.25:22-139.178.68.195:34294.service: Deactivated successfully. Nov 6 00:26:34.720860 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 00:26:34.723133 systemd-logind[1577]: Session 20 logged out. Waiting for processes to exit. Nov 6 00:26:34.724941 systemd-logind[1577]: Removed session 20. Nov 6 00:26:35.035345 containerd[1607]: time="2025-11-06T00:26:35.035278902Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:35.036957 containerd[1607]: time="2025-11-06T00:26:35.036910914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 00:26:35.037082 containerd[1607]: time="2025-11-06T00:26:35.037005903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 6 00:26:35.037260 kubelet[2798]: E1106 00:26:35.037220 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:26:35.037335 kubelet[2798]: E1106 00:26:35.037279 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 00:26:35.037546 kubelet[2798]: E1106 00:26:35.037456 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdqvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dzjlj_calico-system(f296fc03-b516-4c28-a887-9cf8255c6651): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:35.039084 kubelet[2798]: E1106 00:26:35.038984 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:26:36.156527 kubelet[2798]: E1106 00:26:36.156476 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:26:37.160463 containerd[1607]: time="2025-11-06T00:26:37.160414591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:26:37.595026 containerd[1607]: time="2025-11-06T00:26:37.594867383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:37.596320 containerd[1607]: time="2025-11-06T00:26:37.596242755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:26:37.596320 containerd[1607]: time="2025-11-06T00:26:37.596305823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:26:37.596722 kubelet[2798]: E1106 00:26:37.596465 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:37.596722 kubelet[2798]: E1106 00:26:37.596512 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:37.596722 kubelet[2798]: E1106 00:26:37.596631 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g6mw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-5fgqv_calico-apiserver(a68094c7-e135-44b6-9a5d-a63247f50c8f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:37.598088 kubelet[2798]: E1106 00:26:37.598009 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:26:39.889254 systemd[1]: Started sshd@20-135.181.151.25:22-139.178.68.195:34520.service - OpenSSH per-connection server daemon (139.178.68.195:34520). Nov 6 00:26:40.920328 sshd[5532]: Accepted publickey for core from 139.178.68.195 port 34520 ssh2: RSA SHA256:KZ+lWacUXVipzsoQlZVEjNHZCpteqiG39KnpC+S7Ns8 Nov 6 00:26:40.923955 sshd-session[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 00:26:40.933489 systemd-logind[1577]: New session 21 of user core. Nov 6 00:26:40.939236 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 00:26:41.695023 sshd[5535]: Connection closed by 139.178.68.195 port 34520 Nov 6 00:26:41.695564 sshd-session[5532]: pam_unix(sshd:session): session closed for user core Nov 6 00:26:41.702408 systemd-logind[1577]: Session 21 logged out. Waiting for processes to exit. Nov 6 00:26:41.702514 systemd[1]: sshd@20-135.181.151.25:22-139.178.68.195:34520.service: Deactivated successfully. Nov 6 00:26:41.705994 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 00:26:41.711772 systemd-logind[1577]: Removed session 21. 
Nov 6 00:26:46.160080 containerd[1607]: time="2025-11-06T00:26:46.159747743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 00:26:46.161705 kubelet[2798]: E1106 00:26:46.160456 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:26:46.161705 kubelet[2798]: E1106 00:26:46.160597 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9" Nov 6 00:26:46.626901 containerd[1607]: time="2025-11-06T00:26:46.626825973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:46.628676 containerd[1607]: time="2025-11-06T00:26:46.628599102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 00:26:46.629000 containerd[1607]: time="2025-11-06T00:26:46.628732221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 6 00:26:46.629100 kubelet[2798]: E1106 00:26:46.628929 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:46.629100 kubelet[2798]: E1106 00:26:46.628997 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 00:26:46.629937 kubelet[2798]: E1106 00:26:46.629825 2798 kuberuntime_manager.go:1358] "Unhandled 
Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4ttfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5c874f48d-mrqff_calico-apiserver(e926a82a-b4ef-430a-95dd-9253d2a0007c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:46.631220 kubelet[2798]: E1106 00:26:46.631152 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:26:47.158920 kubelet[2798]: E1106 00:26:47.158842 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:26:48.155701 containerd[1607]: time="2025-11-06T00:26:48.155371626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 00:26:48.590604 containerd[1607]: time="2025-11-06T00:26:48.590537912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 00:26:48.592293 containerd[1607]: time="2025-11-06T00:26:48.592209037Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 00:26:48.592640 containerd[1607]: time="2025-11-06T00:26:48.592213386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 6 00:26:48.592707 kubelet[2798]: E1106 00:26:48.592668 2798 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:26:48.593249 kubelet[2798]: E1106 00:26:48.592720 2798 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 00:26:48.593249 kubelet[2798]: E1106 00:26:48.592900 2798 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4sh8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-j4x55_calico-system(7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 00:26:48.594994 kubelet[2798]: E1106 00:26:48.594937 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-j4x55" podUID="7a7f8ed4-e4ea-4ce8-94e8-d2e127cd989a" Nov 6 00:26:49.155005 kubelet[2798]: E1106 
00:26:49.154909 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:26:57.182067 kubelet[2798]: E1106 00:26:57.138233 2798 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51736->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-5c874f48d-mrqff.1875431cc285e6da calico-apiserver 1342 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-5c874f48d-mrqff,UID:e926a82a-b4ef-430a-95dd-9253d2a0007c,APIVersion:v1,ResourceVersion:799,FieldPath:spec.containers{calico-apiserver},},Reason:Failed,Message:Failed to pull image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found,Source:EventSource{Component:kubelet,Host:ci-4459-1-0-n-bff22aa786,},FirstTimestamp:2025-11-06 00:23:39 +0000 UTC,LastTimestamp:2025-11-06 00:26:46.629091646 +0000 UTC m=+229.600925519,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-1-0-n-bff22aa786,}" Nov 6 00:26:57.418956 kubelet[2798]: I1106 00:26:57.418849 2798 status_manager.go:895] "Failed to get status for pod" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" pod="calico-system/whisker-5b54cc9969-225cb" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51830->10.0.0.2:2379: read: connection timed out" Nov 6 00:26:57.661071 kubelet[2798]: E1106 00:26:57.660932 2798 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:51920->10.0.0.2:2379: read: connection timed out" Nov 6 00:26:57.664522 systemd[1]: cri-containerd-a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e.scope: Deactivated successfully. Nov 6 00:26:57.665087 systemd[1]: cri-containerd-a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e.scope: Consumed 2.740s CPU time, 42M memory peak, 35.6M read from disk. 
Nov 6 00:26:57.672628 containerd[1607]: time="2025-11-06T00:26:57.672561584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e\" id:\"a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e\" pid:2643 exit_status:1 exited_at:{seconds:1762388817 nanos:671852502}" Nov 6 00:26:57.673207 containerd[1607]: time="2025-11-06T00:26:57.672642014Z" level=info msg="received exit event container_id:\"a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e\" id:\"a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e\" pid:2643 exit_status:1 exited_at:{seconds:1762388817 nanos:671852502}" Nov 6 00:26:57.693696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e-rootfs.mount: Deactivated successfully. Nov 6 00:26:57.914441 containerd[1607]: time="2025-11-06T00:26:57.914282424Z" level=info msg="received exit event container_id:\"0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111\" id:\"0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111\" pid:3141 exit_status:1 exited_at:{seconds:1762388817 nanos:913851686}" Nov 6 00:26:57.915808 containerd[1607]: time="2025-11-06T00:26:57.914670111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111\" id:\"0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111\" pid:3141 exit_status:1 exited_at:{seconds:1762388817 nanos:913851686}" Nov 6 00:26:57.914796 systemd[1]: cri-containerd-0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111.scope: Deactivated successfully. Nov 6 00:26:57.916369 systemd[1]: cri-containerd-0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111.scope: Consumed 35.287s CPU time, 115.3M memory peak, 37.2M read from disk. Nov 6 00:26:57.954229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111-rootfs.mount: Deactivated successfully. 
Nov 6 00:26:58.091924 kubelet[2798]: I1106 00:26:58.091660 2798 scope.go:117] "RemoveContainer" containerID="0f8de7fe48c5ec050866f846bc35b736a85eb9c2f526efecc7fc147168743111" Nov 6 00:26:58.091924 kubelet[2798]: I1106 00:26:58.091747 2798 scope.go:117] "RemoveContainer" containerID="a3a797f4baf289a746dc68c446171873ed6ee82fd97f7fe31d01c4f23d05292e" Nov 6 00:26:58.094681 containerd[1607]: time="2025-11-06T00:26:58.094637937Z" level=info msg="CreateContainer within sandbox \"2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 6 00:26:58.095184 containerd[1607]: time="2025-11-06T00:26:58.095131573Z" level=info msg="CreateContainer within sandbox \"7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 6 00:26:58.113525 containerd[1607]: time="2025-11-06T00:26:58.112715357Z" level=info msg="Container 9a87e73072568e6994ffb84b1dacbb59f2fb7a8a94e677116c865fa11c8c0e7e: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:58.124438 containerd[1607]: time="2025-11-06T00:26:58.124193089Z" level=info msg="received exit event container_id:\"e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830\" id:\"e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830\" pid:2625 exit_status:1 exited_at:{seconds:1762388818 nanos:122734001}" Nov 6 00:26:58.127232 containerd[1607]: time="2025-11-06T00:26:58.127193601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830\" id:\"e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830\" pid:2625 exit_status:1 exited_at:{seconds:1762388818 nanos:122734001}" Nov 6 00:26:58.137595 systemd[1]: cri-containerd-e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830.scope: Deactivated successfully. Nov 6 00:26:58.138207 systemd[1]: cri-containerd-e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830.scope: Consumed 5.212s CPU time, 84.2M memory peak, 54M read from disk. 
Nov 6 00:26:58.146661 containerd[1607]: time="2025-11-06T00:26:58.146355637Z" level=info msg="Container 105e79d683059319dbc88b0426f0f21b9bf9711716bb30b8617c49588a2071cf: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:58.148947 containerd[1607]: time="2025-11-06T00:26:58.148920089Z" level=info msg="CreateContainer within sandbox \"7ec239cf42e19c881a0ef8020141b15700323a4a764212acccdf00e94d1c7287\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9a87e73072568e6994ffb84b1dacbb59f2fb7a8a94e677116c865fa11c8c0e7e\"" Nov 6 00:26:58.160740 kubelet[2798]: E1106 00:26:58.160642 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-mrqff" podUID="e926a82a-b4ef-430a-95dd-9253d2a0007c" Nov 6 00:26:58.163871 kubelet[2798]: E1106 00:26:58.163787 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b54cc9969-225cb" podUID="9cab2cb0-7aac-4257-baff-c860234a94ee" Nov 6 00:26:58.165130 containerd[1607]: time="2025-11-06T00:26:58.164753377Z" level=info msg="StartContainer for \"9a87e73072568e6994ffb84b1dacbb59f2fb7a8a94e677116c865fa11c8c0e7e\"" Nov 6 00:26:58.167226 containerd[1607]: time="2025-11-06T00:26:58.166492441Z" level=info msg="CreateContainer within sandbox \"2266d6f68bd226e0a50834b229c933790d53ea5662b50a2e99be5fbf3dff31ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"105e79d683059319dbc88b0426f0f21b9bf9711716bb30b8617c49588a2071cf\"" Nov 6 00:26:58.167434 containerd[1607]: time="2025-11-06T00:26:58.167395055Z" level=info msg="StartContainer for \"105e79d683059319dbc88b0426f0f21b9bf9711716bb30b8617c49588a2071cf\"" Nov 6 00:26:58.168203 containerd[1607]: time="2025-11-06T00:26:58.168182363Z" level=info msg="connecting to shim 9a87e73072568e6994ffb84b1dacbb59f2fb7a8a94e677116c865fa11c8c0e7e" address="unix:///run/containerd/s/7ebe8692df3c36d9b51257d942f3e5a7abf821abfe7e1fa7116cff4392e861ef" protocol=ttrpc version=3 Nov 6 00:26:58.168654 containerd[1607]: time="2025-11-06T00:26:58.168623681Z" level=info msg="connecting to shim 105e79d683059319dbc88b0426f0f21b9bf9711716bb30b8617c49588a2071cf" address="unix:///run/containerd/s/e692baeb3420f56337baacb24cbb562038c92f296f94ddbe00e51e214034ddc7" protocol=ttrpc version=3 
Nov 6 00:26:58.197212 systemd[1]: Started cri-containerd-9a87e73072568e6994ffb84b1dacbb59f2fb7a8a94e677116c865fa11c8c0e7e.scope - libcontainer container 9a87e73072568e6994ffb84b1dacbb59f2fb7a8a94e677116c865fa11c8c0e7e. Nov 6 00:26:58.200726 systemd[1]: Started cri-containerd-105e79d683059319dbc88b0426f0f21b9bf9711716bb30b8617c49588a2071cf.scope - libcontainer container 105e79d683059319dbc88b0426f0f21b9bf9711716bb30b8617c49588a2071cf. Nov 6 00:26:58.268723 containerd[1607]: time="2025-11-06T00:26:58.268671832Z" level=info msg="StartContainer for \"9a87e73072568e6994ffb84b1dacbb59f2fb7a8a94e677116c865fa11c8c0e7e\" returns successfully" Nov 6 00:26:58.272209 containerd[1607]: time="2025-11-06T00:26:58.272147225Z" level=info msg="StartContainer for \"105e79d683059319dbc88b0426f0f21b9bf9711716bb30b8617c49588a2071cf\" returns successfully" Nov 6 00:26:58.697026 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830-rootfs.mount: Deactivated successfully. Nov 6 00:26:59.099674 kubelet[2798]: I1106 00:26:59.099193 2798 scope.go:117] "RemoveContainer" containerID="e2881ecb5a85bc722a85b66a498110ab29e6f61ea8343ea533872335ac23a830" Nov 6 00:26:59.102063 containerd[1607]: time="2025-11-06T00:26:59.101859850Z" level=info msg="CreateContainer within sandbox \"8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 6 00:26:59.115059 containerd[1607]: time="2025-11-06T00:26:59.114456232Z" level=info msg="Container 04d49ccc4a664899da92cda54199d1cfacb61d982a6503213a8dd5641216802b: CDI devices from CRI Config.CDIDevices: []" Nov 6 00:26:59.126278 containerd[1607]: time="2025-11-06T00:26:59.126241933Z" level=info msg="CreateContainer within sandbox \"8e0359065bb0f8857b6b4bcd81c1d3c6c7ae74815918fbc82e641f1472c921b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"04d49ccc4a664899da92cda54199d1cfacb61d982a6503213a8dd5641216802b\"" Nov 6 00:26:59.126740 containerd[1607]: time="2025-11-06T00:26:59.126713146Z" level=info msg="StartContainer for \"04d49ccc4a664899da92cda54199d1cfacb61d982a6503213a8dd5641216802b\"" Nov 6 00:26:59.128641 containerd[1607]: time="2025-11-06T00:26:59.128611881Z" level=info msg="connecting to shim 04d49ccc4a664899da92cda54199d1cfacb61d982a6503213a8dd5641216802b" address="unix:///run/containerd/s/c8265540af6fb7b1006f1bf13343d6459369071c9bf03a7d18f1fafc8ece9153" protocol=ttrpc version=3 Nov 6 00:26:59.171150 systemd[1]: Started cri-containerd-04d49ccc4a664899da92cda54199d1cfacb61d982a6503213a8dd5641216802b.scope - libcontainer container 04d49ccc4a664899da92cda54199d1cfacb61d982a6503213a8dd5641216802b. 
Nov 6 00:26:59.229659 containerd[1607]: time="2025-11-06T00:26:59.229615924Z" level=info msg="StartContainer for \"04d49ccc4a664899da92cda54199d1cfacb61d982a6503213a8dd5641216802b\" returns successfully" Nov 6 00:27:00.156750 kubelet[2798]: E1106 00:27:00.156668 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dzjlj" podUID="f296fc03-b516-4c28-a887-9cf8255c6651" Nov 6 00:27:01.156481 kubelet[2798]: E1106 00:27:01.156276 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c874f48d-5fgqv" podUID="a68094c7-e135-44b6-9a5d-a63247f50c8f" Nov 6 00:27:01.156481 kubelet[2798]: E1106 00:27:01.156402 2798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7f86cdd547-wv29n" podUID="cc4a9674-9eca-4968-950d-28ec9c7b89e9"