Nov 5 15:50:49.321140 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:50:49.321230 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:50:49.321247 kernel: BIOS-provided physical RAM map:
Nov 5 15:50:49.321256 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 5 15:50:49.321264 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 5 15:50:49.321273 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 15:50:49.321282 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007cfdbfff] usable
Nov 5 15:50:49.321291 kernel: BIOS-e820: [mem 0x000000007cfdc000-0x000000007cffffff] reserved
Nov 5 15:50:49.321302 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 5 15:50:49.321310 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 5 15:50:49.321319 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:50:49.321327 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 15:50:49.321335 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:50:49.321344 kernel: NX (Execute Disable) protection: active
Nov 5 15:50:49.321357 kernel: APIC: Static calls initialized
Nov 5 15:50:49.321366 kernel: SMBIOS 3.0.0 present.
Nov 5 15:50:49.321375 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Nov 5 15:50:49.321384 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:50:49.321393 kernel: Hypervisor detected: KVM
Nov 5 15:50:49.321402 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 5 15:50:49.321411 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:50:49.321419 kernel: kvm-clock: using sched offset of 3476700072 cycles
Nov 5 15:50:49.321430 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:50:49.321442 kernel: tsc: Detected 2399.998 MHz processor
Nov 5 15:50:49.321452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:50:49.321462 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:50:49.321472 kernel: last_pfn = 0x7cfdc max_arch_pfn = 0x400000000
Nov 5 15:50:49.321481 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 15:50:49.321491 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:50:49.321500 kernel: Using GB pages for direct mapping
Nov 5 15:50:49.321513 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:50:49.321522 kernel: ACPI: RSDP 0x00000000000F5270 000014 (v00 BOCHS )
Nov 5 15:50:49.321532 kernel: ACPI: RSDT 0x000000007CFE2693 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:49.321541 kernel: ACPI: FACP 0x000000007CFE2483 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:49.321551 kernel: ACPI: DSDT 0x000000007CFE0040 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:49.321561 kernel: ACPI: FACS 0x000000007CFE0000 000040
Nov 5 15:50:49.321570 kernel: ACPI: APIC 0x000000007CFE2577 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:49.321582 kernel: ACPI: HPET 0x000000007CFE25F7 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:49.321592 kernel: ACPI: MCFG 0x000000007CFE262F 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:49.321601 kernel: ACPI: WAET 0x000000007CFE266B 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:50:49.321615 kernel: ACPI: Reserving FACP table memory at [mem 0x7cfe2483-0x7cfe2576]
Nov 5 15:50:49.321625 kernel: ACPI: Reserving DSDT table memory at [mem 0x7cfe0040-0x7cfe2482]
Nov 5 15:50:49.321637 kernel: ACPI: Reserving FACS table memory at [mem 0x7cfe0000-0x7cfe003f]
Nov 5 15:50:49.321647 kernel: ACPI: Reserving APIC table memory at [mem 0x7cfe2577-0x7cfe25f6]
Nov 5 15:50:49.321656 kernel: ACPI: Reserving HPET table memory at [mem 0x7cfe25f7-0x7cfe262e]
Nov 5 15:50:49.321666 kernel: ACPI: Reserving MCFG table memory at [mem 0x7cfe262f-0x7cfe266a]
Nov 5 15:50:49.321676 kernel: ACPI: Reserving WAET table memory at [mem 0x7cfe266b-0x7cfe2692]
Nov 5 15:50:49.321685 kernel: No NUMA configuration found
Nov 5 15:50:49.321707 kernel: Faking a node at [mem 0x0000000000000000-0x000000007cfdbfff]
Nov 5 15:50:49.321719 kernel: NODE_DATA(0) allocated [mem 0x7cfd4dc0-0x7cfdbfff]
Nov 5 15:50:49.321728 kernel: Zone ranges:
Nov 5 15:50:49.321738 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:50:49.321748 kernel: DMA32 [mem 0x0000000001000000-0x000000007cfdbfff]
Nov 5 15:50:49.321758 kernel: Normal empty
Nov 5 15:50:49.321767 kernel: Device empty
Nov 5 15:50:49.321777 kernel: Movable zone start for each node
Nov 5 15:50:49.321789 kernel: Early memory node ranges
Nov 5 15:50:49.321799 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 15:50:49.321808 kernel: node 0: [mem 0x0000000000100000-0x000000007cfdbfff]
Nov 5 15:50:49.321818 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007cfdbfff]
Nov 5 15:50:49.321828 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:50:49.321838 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 15:50:49.321847 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 5 15:50:49.321857 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:50:49.321899 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:50:49.321910 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:50:49.321920 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:50:49.321929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:50:49.321939 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:50:49.321949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:50:49.321959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:50:49.321972 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:50:49.321981 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:50:49.321991 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:50:49.322001 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:50:49.322011 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:50:49.322021 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:50:49.322030 kernel: CPU topo: Num. cores per package: 2
Nov 5 15:50:49.322042 kernel: CPU topo: Num. threads per package: 2
Nov 5 15:50:49.322052 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Nov 5 15:50:49.322061 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:50:49.322071 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 5 15:50:49.322081 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:50:49.322091 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:50:49.322101 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 5 15:50:49.322111 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Nov 5 15:50:49.322123 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Nov 5 15:50:49.322133 kernel: pcpu-alloc: [0] 0 1
Nov 5 15:50:49.322143 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 5 15:50:49.322154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:50:49.322865 kernel: random: crng init done
Nov 5 15:50:49.322879 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:50:49.322895 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 5 15:50:49.322905 kernel: Fallback order for Node 0: 0
Nov 5 15:50:49.322915 kernel: Built 1 zonelists, mobility grouping on. Total pages: 511866
Nov 5 15:50:49.322925 kernel: Policy zone: DMA32
Nov 5 15:50:49.322935 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:50:49.322945 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 15:50:49.322955 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:50:49.322965 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:50:49.322978 kernel: Dynamic Preempt: voluntary
Nov 5 15:50:49.322988 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:50:49.323000 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:50:49.323010 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 15:50:49.323021 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:50:49.323031 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:50:49.323042 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:50:49.323054 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:50:49.323064 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 15:50:49.323074 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:50:49.323085 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:50:49.323095 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:50:49.323105 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 5 15:50:49.323115 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:50:49.323127 kernel: Console: colour VGA+ 80x25
Nov 5 15:50:49.323137 kernel: printk: legacy console [tty0] enabled
Nov 5 15:50:49.323147 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:50:49.323157 kernel: ACPI: Core revision 20240827
Nov 5 15:50:49.323241 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:50:49.323254 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:50:49.323264 kernel: x2apic enabled
Nov 5 15:50:49.323275 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:50:49.323285 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:50:49.323295 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns
Nov 5 15:50:49.323308 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Nov 5 15:50:49.323319 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 15:50:49.323329 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 15:50:49.323341 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 15:50:49.323352 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:50:49.323362 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:50:49.323372 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:50:49.323382 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 15:50:49.323392 kernel: active return thunk: retbleed_return_thunk
Nov 5 15:50:49.323403 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 15:50:49.323415 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:50:49.323425 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:50:49.323436 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:50:49.323446 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:50:49.323456 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:50:49.323466 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:50:49.323477 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 15:50:49.323489 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:50:49.323499 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:50:49.323509 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:50:49.323520 kernel: landlock: Up and running.
Nov 5 15:50:49.323530 kernel: SELinux: Initializing.
Nov 5 15:50:49.323540 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:50:49.323550 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 5 15:50:49.323563 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 15:50:49.323573 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 15:50:49.323583 kernel: ... version:                0
Nov 5 15:50:49.323593 kernel: ... bit width:              48
Nov 5 15:50:49.323603 kernel: ... generic registers:      6
Nov 5 15:50:49.323613 kernel: ... value mask:             0000ffffffffffff
Nov 5 15:50:49.323624 kernel: ... max period:             00007fffffffffff
Nov 5 15:50:49.323634 kernel: ... fixed-purpose events:   0
Nov 5 15:50:49.323646 kernel: ... event mask:             000000000000003f
Nov 5 15:50:49.323656 kernel: signal: max sigframe size: 1776
Nov 5 15:50:49.323666 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:50:49.323677 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:50:49.323687 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:50:49.323708 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:50:49.323718 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:50:49.323730 kernel: .... node  #0, CPUs:      #1
Nov 5 15:50:49.323740 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:50:49.323751 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Nov 5 15:50:49.323762 kernel: Memory: 1940304K/2047464K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 102616K reserved, 0K cma-reserved)
Nov 5 15:50:49.323773 kernel: devtmpfs: initialized
Nov 5 15:50:49.323783 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:50:49.323793 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:50:49.323806 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 15:50:49.323816 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:50:49.323826 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:50:49.323836 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:50:49.323847 kernel: audit: type=2000 audit(1762357846.224:1): state=initialized audit_enabled=0 res=1
Nov 5 15:50:49.323857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:50:49.323867 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:50:49.323879 kernel: cpuidle: using governor menu
Nov 5 15:50:49.323890 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:50:49.323900 kernel: dca service started, version 1.12.1
Nov 5 15:50:49.323910 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 5 15:50:49.323920 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:50:49.323930 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:50:49.323941 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:50:49.323953 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:50:49.323963 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:50:49.323974 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:50:49.323984 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:50:49.323994 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:50:49.324004 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:50:49.324014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:50:49.324027 kernel: ACPI: Interpreter enabled
Nov 5 15:50:49.324037 kernel: ACPI: PM: (supports S0 S5)
Nov 5 15:50:49.324047 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:50:49.324057 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:50:49.324068 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:50:49.324078 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 15:50:49.324088 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 15:50:49.325355 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:50:49.325498 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 15:50:49.325617 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 15:50:49.325632 kernel: PCI host bridge to bus 0000:00
Nov 5 15:50:49.325770 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:50:49.325882 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 15:50:49.325983 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:50:49.326082 kernel: pci_bus 0000:00: root bus resource [mem 0x7d000000-0xafffffff window]
Nov 5 15:50:49.326201 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 15:50:49.326304 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 5 15:50:49.326403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 15:50:49.326545 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:50:49.326674 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:50:49.326806 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfb800000-0xfbffffff pref]
Nov 5 15:50:49.326920 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfd200000-0xfd203fff 64bit pref]
Nov 5 15:50:49.327037 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Nov 5 15:50:49.327147 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Nov 5 15:50:49.332325 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:50:49.332445 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.335334 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Nov 5 15:50:49.335437 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 5 15:50:49.335511 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 5 15:50:49.335577 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 5 15:50:49.335644 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.335715 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Nov 5 15:50:49.335773 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 5 15:50:49.335863 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 5 15:50:49.335951 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 5 15:50:49.336048 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.336110 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Nov 5 15:50:49.336671 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 5 15:50:49.336759 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 5 15:50:49.336819 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 5 15:50:49.336885 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.336947 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Nov 5 15:50:49.337004 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 5 15:50:49.337061 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 5 15:50:49.337119 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 5 15:50:49.337261 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.337328 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Nov 5 15:50:49.337389 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 5 15:50:49.337447 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 5 15:50:49.337505 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 5 15:50:49.337568 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.337624 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Nov 5 15:50:49.337679 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 5 15:50:49.337752 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 5 15:50:49.337808 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 5 15:50:49.337870 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.337926 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Nov 5 15:50:49.337990 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 5 15:50:49.338059 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 5 15:50:49.338120 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 5 15:50:49.338236 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.338350 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Nov 5 15:50:49.338417 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 5 15:50:49.338480 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 5 15:50:49.338540 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 5 15:50:49.338604 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Nov 5 15:50:49.338679 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Nov 5 15:50:49.338777 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 5 15:50:49.338836 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 5 15:50:49.338893 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 5 15:50:49.338958 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:50:49.339014 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 15:50:49.339077 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 15:50:49.339159 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc040-0xc05f]
Nov 5 15:50:49.339274 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea1a000-0xfea1afff]
Nov 5 15:50:49.339351 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 15:50:49.339410 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 5 15:50:49.339476 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 5 15:50:49.339537 kernel: pci 0000:01:00.0: BAR 1 [mem 0xfe880000-0xfe880fff]
Nov 5 15:50:49.339597 kernel: pci 0000:01:00.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 5 15:50:49.339656 kernel: pci 0000:01:00.0: ROM [mem 0xfe800000-0xfe87ffff pref]
Nov 5 15:50:49.339724 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 5 15:50:49.339792 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Nov 5 15:50:49.339850 kernel: pci 0000:02:00.0: BAR 0 [mem 0xfe600000-0xfe603fff 64bit]
Nov 5 15:50:49.339909 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 5 15:50:49.339975 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Nov 5 15:50:49.340036 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe400000-0xfe400fff]
Nov 5 15:50:49.340094 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfcc00000-0xfcc03fff 64bit pref]
Nov 5 15:50:49.340281 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 5 15:50:49.340365 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Nov 5 15:50:49.340428 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 5 15:50:49.340488 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 5 15:50:49.340562 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Nov 5 15:50:49.340623 kernel: pci 0000:05:00.0: BAR 1 [mem 0xfe000000-0xfe000fff]
Nov 5 15:50:49.340680 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfc800000-0xfc803fff 64bit pref]
Nov 5 15:50:49.340748 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 5 15:50:49.340813 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Nov 5 15:50:49.340874 kernel: pci 0000:06:00.0: BAR 1 [mem 0xfde00000-0xfde00fff]
Nov 5 15:50:49.340932 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfc600000-0xfc603fff 64bit pref]
Nov 5 15:50:49.340993 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 5 15:50:49.341001 kernel: acpiphp: Slot [0] registered
Nov 5 15:50:49.341067 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Nov 5 15:50:49.341141 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfdc80000-0xfdc80fff]
Nov 5 15:50:49.341246 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfc400000-0xfc403fff 64bit pref]
Nov 5 15:50:49.341332 kernel: pci 0000:07:00.0: ROM [mem 0xfdc00000-0xfdc7ffff pref]
Nov 5 15:50:49.341408 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 5 15:50:49.341419 kernel: acpiphp: Slot [0-2] registered
Nov 5 15:50:49.341498 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 5 15:50:49.341510 kernel: acpiphp: Slot [0-3] registered
Nov 5 15:50:49.341603 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 5 15:50:49.341615 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 15:50:49.341624 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 15:50:49.341635 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 15:50:49.341646 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 15:50:49.341656 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 15:50:49.341668 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 15:50:49.341682 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 15:50:49.341690 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 15:50:49.341708 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 15:50:49.341716 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 15:50:49.341724 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 15:50:49.341732 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 15:50:49.341740 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 15:50:49.341750 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 15:50:49.341758 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 15:50:49.341767 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 15:50:49.341775 kernel: iommu: Default domain type: Translated
Nov 5 15:50:49.341784 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 15:50:49.341793 kernel: PCI: Using ACPI for IRQ routing
Nov 5 15:50:49.341802 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 15:50:49.341812 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 5 15:50:49.341821 kernel: e820: reserve RAM buffer [mem 0x7cfdc000-0x7fffffff]
Nov 5 15:50:49.341924 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 15:50:49.342017 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 15:50:49.342100 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 15:50:49.342109 kernel: vgaarb: loaded
Nov 5 15:50:49.342118 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 15:50:49.342130 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 15:50:49.342138 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 15:50:49.342147 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:50:49.342156 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:50:49.342184 kernel: pnp: PnP ACPI init
Nov 5 15:50:49.342302 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 5 15:50:49.342322 kernel: pnp: PnP ACPI: found 5 devices
Nov 5 15:50:49.342339 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 15:50:49.342350 kernel: NET: Registered PF_INET protocol family
Nov 5 15:50:49.343349 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:50:49.343455 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 5 15:50:49.343476 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:50:49.343619 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 5 15:50:49.343631 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 5 15:50:49.343641 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 5 15:50:49.343646 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 15:50:49.343652 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 5 15:50:49.343657 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:50:49.343663 kernel: NET: Registered PF_XDP protocol family
Nov 5 15:50:49.343764 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 5 15:50:49.343830 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 5 15:50:49.343893 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 5 15:50:49.343951 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Nov 5 15:50:49.344008 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Nov 5 15:50:49.344120 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Nov 5 15:50:49.344250 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Nov 5 15:50:49.344318 kernel: pci 0000:00:02.0: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 5 15:50:49.344377 kernel: pci 0000:00:02.0: bridge window [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 5 15:50:49.344441 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Nov 5 15:50:49.344498 kernel: pci 0000:00:02.1: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 5 15:50:49.344556 kernel: pci 0000:00:02.1: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 5 15:50:49.344616 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Nov 5 15:50:49.344672 kernel: pci 0000:00:02.2: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 5 15:50:49.344739 kernel: pci 0000:00:02.2: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 5 15:50:49.344798 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Nov 5 15:50:49.344857 kernel: pci 0000:00:02.3: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 5 15:50:49.344916 kernel: pci 0000:00:02.3: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 5 15:50:49.344975 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Nov 5 15:50:49.345032 kernel: pci 0000:00:02.4: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 5 15:50:49.345087 kernel: pci 0000:00:02.4: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 5 15:50:49.345147 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Nov 5 15:50:49.345232 kernel: pci 0000:00:02.5: bridge window [mem 0xfde00000-0xfdffffff]
Nov 5 15:50:49.345290 kernel: pci 0000:00:02.5: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 5 15:50:49.345347 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Nov 5 15:50:49.345403 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Nov 5 15:50:49.345458 kernel: pci 0000:00:02.6: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 5 15:50:49.345517 kernel: pci 0000:00:02.6: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 5 15:50:49.345573 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Nov 5 15:50:49.345629 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Nov 5 15:50:49.345686 kernel: pci 0000:00:02.7: bridge window [mem 0xfda00000-0xfdbfffff]
Nov 5 15:50:49.345754 kernel: pci 0000:00:02.7: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 5 15:50:49.345811 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Nov 5 15:50:49.345867 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Nov 5 15:50:49.345923 kernel: pci 0000:00:03.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 5 15:50:49.345977 kernel: pci 0000:00:03.0: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 5 15:50:49.346034 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 15:50:49.346085 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 15:50:49.346135 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 15:50:49.346209 kernel: pci_bus 0000:00: resource 7 [mem 0x7d000000-0xafffffff window]
Nov 5 15:50:49.346268 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 5 15:50:49.346318 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 5 15:50:49.346377 kernel: pci_bus 0000:01: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 5 15:50:49.346446 kernel: pci_bus 0000:01: resource 2 [mem 0xfd000000-0xfd1fffff 64bit pref]
Nov 5 15:50:49.346506 kernel: pci_bus 0000:02: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 5 15:50:49.346562 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 5 15:50:49.346620 kernel: pci_bus 0000:03: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 5 15:50:49.346673 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 5 15:50:49.346747 kernel: pci_bus 0000:04: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 5 15:50:49.346800 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 5 15:50:49.346870 kernel: pci_bus 0000:05: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 5 15:50:49.346948 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 5 15:50:49.347032 kernel: pci_bus 0000:06: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 5 15:50:49.347105 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 5 15:50:49.348233 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Nov 5 15:50:49.348319 kernel: pci_bus 0000:07: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 5 15:50:49.348376 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 5 15:50:49.348437 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Nov 5 15:50:49.348491 kernel: pci_bus 0000:08: resource 1 [mem 0xfda00000-0xfdbfffff]
Nov 5 15:50:49.348543 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 5 15:50:49.348599 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Nov 5 15:50:49.348653 kernel: pci_bus 0000:09: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 5 15:50:49.348718 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 5 15:50:49.348726 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 15:50:49.348732 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:50:49.348738 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns
Nov 5 15:50:49.348744 kernel: 
Initialise system trusted keyrings Nov 5 15:50:49.348750 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 5 15:50:49.348757 kernel: Key type asymmetric registered Nov 5 15:50:49.348763 kernel: Asymmetric key parser 'x509' registered Nov 5 15:50:49.348769 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 5 15:50:49.348774 kernel: io scheduler mq-deadline registered Nov 5 15:50:49.348780 kernel: io scheduler kyber registered Nov 5 15:50:49.348786 kernel: io scheduler bfq registered Nov 5 15:50:49.348846 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 5 15:50:49.348908 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 5 15:50:49.348965 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 5 15:50:49.349022 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 5 15:50:49.349079 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 5 15:50:49.349134 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 5 15:50:49.349232 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 5 15:50:49.349297 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 5 15:50:49.349355 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 5 15:50:49.349411 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 5 15:50:49.349467 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 5 15:50:49.349522 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 5 15:50:49.349578 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 5 15:50:49.349634 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 5 15:50:49.349701 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 5 15:50:49.349758 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 5 15:50:49.349767 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 5 15:50:49.349822 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32 Nov 5 15:50:49.349877 kernel: pcieport 0000:00:03.0: 
AER: enabled with IRQ 32 Nov 5 15:50:49.349886 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 5 15:50:49.349892 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21 Nov 5 15:50:49.349898 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:50:49.349904 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 5 15:50:49.349910 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 5 15:50:49.349915 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 5 15:50:49.349921 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 5 15:50:49.349929 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 5 15:50:49.349993 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 5 15:50:49.350047 kernel: rtc_cmos 00:03: registered as rtc0 Nov 5 15:50:49.350099 kernel: rtc_cmos 00:03: setting system clock to 2025-11-05T15:50:47 UTC (1762357847) Nov 5 15:50:49.350151 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 5 15:50:49.350158 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 5 15:50:49.350187 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:50:49.350196 kernel: Segment Routing with IPv6 Nov 5 15:50:49.350206 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:50:49.350215 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:50:49.350224 kernel: Key type dns_resolver registered Nov 5 15:50:49.350231 kernel: IPI shorthand broadcast: enabled Nov 5 15:50:49.350237 kernel: sched_clock: Marking stable (1466015181, 155391897)->(1633057021, -11649943) Nov 5 15:50:49.350244 kernel: registered taskstats version 1 Nov 5 15:50:49.350249 kernel: Loading compiled-in X.509 certificates Nov 5 15:50:49.350255 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382' Nov 5 15:50:49.350261 kernel: Demotion targets for Node 0: null Nov 5 
15:50:49.350267 kernel: Key type .fscrypt registered Nov 5 15:50:49.350272 kernel: Key type fscrypt-provisioning registered Nov 5 15:50:49.350278 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 15:50:49.350285 kernel: ima: Allocated hash algorithm: sha1 Nov 5 15:50:49.350290 kernel: ima: No architecture policies found Nov 5 15:50:49.350296 kernel: clk: Disabling unused clocks Nov 5 15:50:49.350301 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 5 15:50:49.350307 kernel: Write protecting the kernel read-only data: 40960k Nov 5 15:50:49.350313 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 5 15:50:49.350319 kernel: Run /init as init process Nov 5 15:50:49.350326 kernel: with arguments: Nov 5 15:50:49.350332 kernel: /init Nov 5 15:50:49.350337 kernel: with environment: Nov 5 15:50:49.350343 kernel: HOME=/ Nov 5 15:50:49.350348 kernel: TERM=linux Nov 5 15:50:49.350354 kernel: ACPI: bus type USB registered Nov 5 15:50:49.350359 kernel: usbcore: registered new interface driver usbfs Nov 5 15:50:49.350365 kernel: usbcore: registered new interface driver hub Nov 5 15:50:49.350372 kernel: usbcore: registered new device driver usb Nov 5 15:50:49.350446 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 5 15:50:49.350506 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Nov 5 15:50:49.350565 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 5 15:50:49.350622 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Nov 5 15:50:49.350680 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Nov 5 15:50:49.350751 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Nov 5 15:50:49.350836 kernel: hub 1-0:1.0: USB hub found Nov 5 15:50:49.350900 kernel: hub 1-0:1.0: 4 ports detected Nov 5 15:50:49.350972 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Nov 5 15:50:49.351043 kernel: hub 2-0:1.0: USB hub found
Nov 5 15:50:49.351108 kernel: hub 2-0:1.0: 4 ports detected
Nov 5 15:50:49.351116 kernel: SCSI subsystem initialized
Nov 5 15:50:49.351122 kernel: libata version 3.00 loaded.
Nov 5 15:50:49.351220 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 15:50:49.351230 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 15:50:49.351290 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 15:50:49.351349 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 15:50:49.351407 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 15:50:49.351476 kernel: scsi host0: ahci
Nov 5 15:50:49.351543 kernel: scsi host1: ahci
Nov 5 15:50:49.351605 kernel: scsi host2: ahci
Nov 5 15:50:49.351669 kernel: scsi host3: ahci
Nov 5 15:50:49.351747 kernel: scsi host4: ahci
Nov 5 15:50:49.351811 kernel: scsi host5: ahci
Nov 5 15:50:49.351819 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a100 irq 38 lpm-pol 1
Nov 5 15:50:49.351825 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a180 irq 38 lpm-pol 1
Nov 5 15:50:49.351830 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a200 irq 38 lpm-pol 1
Nov 5 15:50:49.351838 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a280 irq 38 lpm-pol 1
Nov 5 15:50:49.351843 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a300 irq 38 lpm-pol 1
Nov 5 15:50:49.351849 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea1a000 port 0xfea1a380 irq 38 lpm-pol 1
Nov 5 15:50:49.351922 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Nov 5 15:50:49.351931 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 5 15:50:49.351937 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 5 15:50:49.351942 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 15:50:49.351949 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 15:50:49.351955 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 5 15:50:49.351961 kernel: ata1.00: LPM support broken, forcing max_power
Nov 5 15:50:49.351967 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 5 15:50:49.351972 kernel: ata1.00: applying bridge limits
Nov 5 15:50:49.351978 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 15:50:49.351983 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 15:50:49.351989 kernel: ata1.00: LPM support broken, forcing max_power
Nov 5 15:50:49.351995 kernel: ata1.00: configured for UDMA/100
Nov 5 15:50:49.352065 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 5 15:50:49.352073 kernel: usbcore: registered new interface driver usbhid
Nov 5 15:50:49.352078 kernel: usbhid: USB HID core driver
Nov 5 15:50:49.352148 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Nov 5 15:50:49.352264 kernel: scsi host6: Virtio SCSI HBA
Nov 5 15:50:49.352342 kernel: scsi 6:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 5 15:50:49.352407 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 5 15:50:49.352414 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 15:50:49.352473 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 5 15:50:49.352536 kernel: sd 6:0:0:0: Power-on or device reset occurred
Nov 5 15:50:49.352603 kernel: sd 6:0:0:0: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB)
Nov 5 15:50:49.352664 kernel: sd 6:0:0:0: [sda] Write Protect is off
Nov 5 15:50:49.352738 kernel: sd 6:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 5 15:50:49.352801 kernel: sd 6:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 5 15:50:49.352808 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 15:50:49.352814 kernel: GPT:25804799 != 80003071
Nov 5 15:50:49.352821 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 15:50:49.352826 kernel: GPT:25804799 != 80003071
Nov 5 15:50:49.352832 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 15:50:49.352838 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 5 15:50:49.352900 kernel: sd 6:0:0:0: [sda] Attached SCSI disk
Nov 5 15:50:49.352907 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Nov 5 15:50:49.352988 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Nov 5 15:50:49.352999 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 15:50:49.353005 kernel: device-mapper: uevent: version 1.0.3
Nov 5 15:50:49.353011 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 15:50:49.353017 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 15:50:49.353022 kernel: raid6: avx2x4 gen() 57549 MB/s
Nov 5 15:50:49.353028 kernel: raid6: avx2x2 gen() 56270 MB/s
Nov 5 15:50:49.353036 kernel: raid6: avx2x1 gen() 40722 MB/s
Nov 5 15:50:49.353041 kernel: raid6: using algorithm avx2x4 gen() 57549 MB/s
Nov 5 15:50:49.353047 kernel: raid6: .... xor() 4382 MB/s, rmw enabled
Nov 5 15:50:49.353053 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 15:50:49.353058 kernel: xor: automatically using best checksumming function avx
Nov 5 15:50:49.353064 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 15:50:49.353070 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (184)
Nov 5 15:50:49.353077 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01
Nov 5 15:50:49.353083 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:49.353088 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 5 15:50:49.353094 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 15:50:49.353100 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 15:50:49.353106 kernel: loop: module loaded
Nov 5 15:50:49.353112 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 15:50:49.353118 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 15:50:49.353125 systemd[1]: Successfully made /usr/ read-only.
Nov 5 15:50:49.353133 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:50:49.353140 systemd[1]: Detected virtualization kvm.
Nov 5 15:50:49.353147 systemd[1]: Detected architecture x86-64.
Nov 5 15:50:49.353153 systemd[1]: Running in initrd.
Nov 5 15:50:49.353159 systemd[1]: No hostname configured, using default hostname.
Nov 5 15:50:49.353328 systemd[1]: Hostname set to .
Nov 5 15:50:49.353335 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:50:49.353341 systemd[1]: Queued start job for default target initrd.target.
Nov 5 15:50:49.353352 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:50:49.353528 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:50:49.353539 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:50:49.353549 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 15:50:49.353555 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:50:49.353562 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 15:50:49.353568 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 15:50:49.353575 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:50:49.353582 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:50:49.353588 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:50:49.353594 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:50:49.353600 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:50:49.353606 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:50:49.353612 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:50:49.353618 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:50:49.353626 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:50:49.353631 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 15:50:49.353637 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 15:50:49.353643 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:50:49.353649 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:50:49.353655 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:50:49.353661 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:50:49.353668 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 15:50:49.353674 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 15:50:49.353680 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:50:49.353686 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 15:50:49.353701 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 15:50:49.353708 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 15:50:49.353715 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:50:49.353721 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:50:49.353727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:49.353733 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 15:50:49.353740 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:50:49.353764 systemd-journald[319]: Collecting audit messages is disabled.
Nov 5 15:50:49.353780 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 15:50:49.353788 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 15:50:49.353794 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:50:49.353800 kernel: Bridge firewalling registered
Nov 5 15:50:49.353806 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:50:49.353813 systemd-journald[319]: Journal started
Nov 5 15:50:49.353829 systemd-journald[319]: Runtime Journal (/run/log/journal/4d7b6872bec74d068bcf5c6663c44852) is 4.7M, max 38.3M, 33.5M free.
Nov 5 15:50:49.336274 systemd-modules-load[322]: Inserted module 'br_netfilter'
Nov 5 15:50:49.359352 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:50:49.363486 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:50:49.367864 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:50:49.372283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:50:49.426946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:50:49.428537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:49.429875 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:50:49.432311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 15:50:49.436387 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:50:49.438633 systemd-tmpfiles[341]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 15:50:49.439596 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:50:49.442118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:50:49.452187 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:50:49.456247 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 15:50:49.473423 dracut-cmdline[361]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:50:49.495835 systemd-resolved[346]: Positive Trust Anchors:
Nov 5 15:50:49.495853 systemd-resolved[346]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:50:49.495857 systemd-resolved[346]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:50:49.495892 systemd-resolved[346]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:50:49.522470 systemd-resolved[346]: Defaulting to hostname 'linux'.
Nov 5 15:50:49.523995 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:50:49.525392 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:50:49.548193 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 15:50:49.563214 kernel: iscsi: registered transport (tcp)
Nov 5 15:50:49.579364 kernel: iscsi: registered transport (qla4xxx)
Nov 5 15:50:49.579454 kernel: QLogic iSCSI HBA Driver
Nov 5 15:50:49.596874 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:50:49.612636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:50:49.614412 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:50:49.639855 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:50:49.641336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 15:50:49.646553 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 15:50:49.666072 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:50:49.670275 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:50:49.693821 systemd-udevd[588]: Using default interface naming scheme 'v257'.
Nov 5 15:50:49.702006 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:50:49.705250 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 15:50:49.709134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:50:49.714023 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:50:49.722761 dracut-pre-trigger[688]: rd.md=0: removing MD RAID activation
Nov 5 15:50:49.742367 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:50:49.744178 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:50:49.754484 systemd-networkd[700]: lo: Link UP
Nov 5 15:50:49.754493 systemd-networkd[700]: lo: Gained carrier
Nov 5 15:50:49.754839 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:50:49.756785 systemd[1]: Reached target network.target - Network.
Nov 5 15:50:49.792385 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:50:49.795398 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 15:50:49.867524 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 5 15:50:49.884133 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 5 15:50:49.892884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 5 15:50:49.900832 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 5 15:50:49.903248 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 15:50:49.906182 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 15:50:49.913470 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:50:49.915452 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:49.917450 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:49.921343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:49.923604 disk-uuid[772]: Primary Header is updated.
Nov 5 15:50:49.923604 disk-uuid[772]: Secondary Entries is updated.
Nov 5 15:50:49.923604 disk-uuid[772]: Secondary Header is updated.
Nov 5 15:50:49.934278 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Nov 5 15:50:49.936191 kernel: AES CTR mode by8 optimization enabled
Nov 5 15:50:49.955970 systemd-networkd[700]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:49.955981 systemd-networkd[700]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:50:49.956438 systemd-networkd[700]: eth1: Link UP
Nov 5 15:50:49.956576 systemd-networkd[700]: eth1: Gained carrier
Nov 5 15:50:49.956583 systemd-networkd[700]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:49.987752 systemd-networkd[700]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:49.990189 systemd-networkd[700]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:50:49.990682 systemd-networkd[700]: eth0: Link UP
Nov 5 15:50:49.990799 systemd-networkd[700]: eth0: Gained carrier
Nov 5 15:50:49.990808 systemd-networkd[700]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:50.032229 systemd-networkd[700]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 5 15:50:50.041207 systemd-networkd[700]: eth0: DHCPv4 address 46.62.132.115/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 5 15:50:50.045319 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:50:50.046235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:50.047536 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:50:50.048710 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:50:50.049641 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:50:50.052251 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 15:50:50.063564 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:50:50.987451 disk-uuid[774]: Warning: The kernel is still using the old partition table.
Nov 5 15:50:50.987451 disk-uuid[774]: The new table will be used at the next reboot or after you
Nov 5 15:50:50.987451 disk-uuid[774]: run partprobe(8) or kpartx(8)
Nov 5 15:50:50.987451 disk-uuid[774]: The operation has completed successfully.
Nov 5 15:50:51.001802 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 15:50:51.001956 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 15:50:51.005354 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 15:50:51.066243 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (864)
Nov 5 15:50:51.072251 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:51.072346 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:51.083450 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 5 15:50:51.083515 kernel: BTRFS info (device sda6): turning on async discard
Nov 5 15:50:51.087773 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 15:50:51.099227 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:51.101276 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 15:50:51.104147 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 15:50:51.230147 systemd-networkd[700]: eth0: Gained IPv6LL
Nov 5 15:50:51.304324 ignition[883]: Ignition 2.22.0
Nov 5 15:50:51.304336 ignition[883]: Stage: fetch-offline
Nov 5 15:50:51.305845 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:50:51.304378 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:51.309314 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 5 15:50:51.304385 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 5 15:50:51.304663 ignition[883]: parsed url from cmdline: ""
Nov 5 15:50:51.304667 ignition[883]: no config URL provided
Nov 5 15:50:51.304672 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 15:50:51.304682 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Nov 5 15:50:51.304687 ignition[883]: failed to fetch config: resource requires networking
Nov 5 15:50:51.304954 ignition[883]: Ignition finished successfully
Nov 5 15:50:51.331534 ignition[889]: Ignition 2.22.0
Nov 5 15:50:51.331545 ignition[889]: Stage: fetch
Nov 5 15:50:51.331643 ignition[889]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:51.331647 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 5 15:50:51.331716 ignition[889]: parsed url from cmdline: ""
Nov 5 15:50:51.331718 ignition[889]: no config URL provided
Nov 5 15:50:51.331722 ignition[889]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 15:50:51.331734 ignition[889]: no config at "/usr/lib/ignition/user.ign"
Nov 5 15:50:51.331757 ignition[889]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Nov 5 15:50:51.336997 ignition[889]: GET result: OK
Nov 5 15:50:51.337061 ignition[889]: parsing config with SHA512: d3a1e8253ca95c21417dbb138d2bfaa7852cb12c255cf48a8aba7f3561e4a9d18520c097c7b820346650698dc7f1de453df24718aefc7e5e9a74a2f310b53bf8
Nov 5 15:50:51.344906 unknown[889]: fetched base config from "system"
Nov 5 15:50:51.344914 unknown[889]: fetched base config from "system"
Nov 5 15:50:51.345616 ignition[889]: fetch: fetch complete
Nov 5 15:50:51.344917 unknown[889]: fetched user config from "hetzner"
Nov 5 15:50:51.345621 ignition[889]: fetch: fetch passed
Nov 5 15:50:51.347321 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 5 15:50:51.345660 ignition[889]: Ignition finished successfully
Nov 5 15:50:51.355287 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 15:50:51.384002 ignition[896]: Ignition 2.22.0
Nov 5 15:50:51.384016 ignition[896]: Stage: kargs
Nov 5 15:50:51.384139 ignition[896]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:51.386343 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 15:50:51.384144 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 5 15:50:51.384786 ignition[896]: kargs: kargs passed
Nov 5 15:50:51.384815 ignition[896]: Ignition finished successfully
Nov 5 15:50:51.390966 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 15:50:51.412509 ignition[903]: Ignition 2.22.0
Nov 5 15:50:51.412520 ignition[903]: Stage: disks
Nov 5 15:50:51.412616 ignition[903]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:51.414645 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 15:50:51.412622 ignition[903]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 5 15:50:51.415388 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 15:50:51.413444 ignition[903]: disks: disks passed
Nov 5 15:50:51.416665 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 15:50:51.413476 ignition[903]: Ignition finished successfully
Nov 5 15:50:51.418208 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:50:51.419939 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:50:51.421651 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:50:51.423720 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 15:50:51.465652 systemd-fsck[911]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 5 15:50:51.468867 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 15:50:51.472004 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 15:50:51.588247 kernel: EXT4-fs (sda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none.
Nov 5 15:50:51.588497 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 15:50:51.589502 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:50:51.592325 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:50:51.596201 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 15:50:51.602786 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 5 15:50:51.603944 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 15:50:51.604798 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:50:51.607796 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 15:50:51.616188 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (919)
Nov 5 15:50:51.617667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 15:50:51.624690 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:51.624725 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:51.637012 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 5 15:50:51.637055 kernel: BTRFS info (device sda6): turning on async discard
Nov 5 15:50:51.637069 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 15:50:51.639282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:50:51.700080 coreos-metadata[921]: Nov 05 15:50:51.700 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Nov 5 15:50:51.704142 coreos-metadata[921]: Nov 05 15:50:51.700 INFO Fetch successful
Nov 5 15:50:51.704142 coreos-metadata[921]: Nov 05 15:50:51.701 INFO wrote hostname ci-4487-0-1-1-e8a5680daa to /sysroot/etc/hostname
Nov 5 15:50:51.702861 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 5 15:50:51.708658 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 15:50:51.713295 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Nov 5 15:50:51.717777 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 15:50:51.721015 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 15:50:51.829919 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 15:50:51.832551 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 15:50:51.835325 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 15:50:51.865486 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 15:50:51.871324 kernel: BTRFS info (device sda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:51.889889 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 15:50:51.902425 ignition[1038]: INFO : Ignition 2.22.0
Nov 5 15:50:51.902425 ignition[1038]: INFO : Stage: mount
Nov 5 15:50:51.903597 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:51.903597 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 5 15:50:51.905216 ignition[1038]: INFO : mount: mount passed
Nov 5 15:50:51.905216 ignition[1038]: INFO : Ignition finished successfully
Nov 5 15:50:51.906714 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 15:50:51.909861 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 15:50:51.996482 systemd-networkd[700]: eth1: Gained IPv6LL
Nov 5 15:50:52.592338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:50:52.631223 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1048)
Nov 5 15:50:52.637692 kernel: BTRFS info (device sda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:50:52.637781 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:50:52.648442 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 5 15:50:52.648507 kernel: BTRFS info (device sda6): turning on async discard
Nov 5 15:50:52.651389 kernel: BTRFS info (device sda6): enabling free space tree
Nov 5 15:50:52.656494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:50:52.692196 ignition[1064]: INFO : Ignition 2.22.0
Nov 5 15:50:52.692196 ignition[1064]: INFO : Stage: files
Nov 5 15:50:52.694132 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:52.694132 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 5 15:50:52.694132 ignition[1064]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 15:50:52.697427 ignition[1064]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 15:50:52.697427 ignition[1064]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 15:50:52.699899 ignition[1064]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 15:50:52.699899 ignition[1064]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 15:50:52.702516 ignition[1064]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 15:50:52.701326 unknown[1064]: wrote ssh authorized keys file for user: core
Nov 5 15:50:52.705262 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:50:52.705262 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 15:50:52.852438 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 15:50:53.155531 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:50:53.155531 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 15:50:53.159325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 15:50:53.159325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:50:53.159325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:50:53.159325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:50:53.159325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:50:53.159325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:50:53.159325 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:50:53.179464 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:50:53.179464 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:50:53.179464 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:53.179464 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:53.179464 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:53.179464 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Nov 5 15:50:53.609477 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 15:50:55.213805 ignition[1064]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Nov 5 15:50:55.213805 ignition[1064]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 15:50:55.216382 ignition[1064]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:50:55.220260 ignition[1064]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:50:55.220260 ignition[1064]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 15:50:55.220260 ignition[1064]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:50:55.227039 ignition[1064]: INFO : files: files passed
Nov 5 15:50:55.227039 ignition[1064]: INFO : Ignition finished successfully
Nov 5 15:50:55.223600 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 15:50:55.228285 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 15:50:55.236822 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 15:50:55.242927 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 15:50:55.244686 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 15:50:55.257478 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:50:55.258478 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:50:55.258478 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:50:55.262426 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:50:55.264797 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 15:50:55.268264 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:50:55.313619 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:50:55.313727 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:50:55.315328 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:50:55.316230 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:50:55.317966 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:50:55.318880 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:50:55.340063 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:50:55.342408 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:50:55.358843 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:50:55.358996 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:50:55.360193 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:50:55.361324 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:50:55.362383 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:50:55.362532 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:50:55.364123 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:50:55.365143 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:50:55.367043 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:50:55.368759 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:50:55.370242 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:50:55.372406 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:50:55.374238 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:50:55.376226 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:50:55.378114 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:50:55.380022 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:50:55.381994 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:50:55.383787 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:50:55.383911 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:50:55.386105 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:50:55.387161 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:50:55.388847 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:50:55.388985 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:50:55.390743 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:50:55.390884 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:50:55.393811 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:50:55.393966 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:50:55.394909 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:50:55.395043 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:50:55.396559 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 5 15:50:55.396718 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 5 15:50:55.400256 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:50:55.401370 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:50:55.401522 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:50:55.414620 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:50:55.422481 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:50:55.422748 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:50:55.425795 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:50:55.426543 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:50:55.438292 ignition[1121]: INFO : Ignition 2.22.0
Nov 5 15:50:55.438292 ignition[1121]: INFO : Stage: umount
Nov 5 15:50:55.438292 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:50:55.438292 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Nov 5 15:50:55.428988 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:50:55.444064 ignition[1121]: INFO : umount: umount passed
Nov 5 15:50:55.444064 ignition[1121]: INFO : Ignition finished successfully
Nov 5 15:50:55.429216 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:50:55.443005 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:50:55.445035 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:50:55.449293 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:50:55.449383 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:50:55.451326 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:50:55.451414 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:50:55.453406 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:50:55.453461 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:50:55.455953 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 5 15:50:55.456010 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 5 15:50:55.458315 systemd[1]: Stopped target network.target - Network.
Nov 5 15:50:55.459752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:50:55.459831 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:50:55.460951 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:50:55.462419 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:50:55.466461 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:50:55.467824 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:50:55.469531 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:50:55.471128 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:50:55.471225 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:50:55.472824 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:50:55.472874 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:50:55.474617 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:50:55.474691 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:50:55.476277 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:50:55.476361 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:50:55.477933 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:50:55.479528 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:50:55.482921 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:50:55.483854 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:50:55.483979 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:50:55.487450 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:50:55.487562 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:50:55.491538 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:50:55.491641 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:50:55.496712 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:50:55.496779 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:50:55.498985 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:50:55.500586 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:50:55.500609 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:50:55.503230 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:50:55.503658 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:50:55.503694 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:50:55.506820 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:50:55.506852 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:50:55.508262 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:50:55.508291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:50:55.510060 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:50:55.522669 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:50:55.522776 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:50:55.524198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:50:55.524224 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:50:55.526205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:50:55.526225 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:50:55.526638 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:50:55.526671 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:50:55.527094 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:50:55.527115 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:50:55.527581 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:50:55.527608 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:50:55.528608 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:50:55.530016 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:50:55.530051 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:50:55.532990 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:50:55.533019 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:50:55.534534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:50:55.534565 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:55.543875 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:50:55.543934 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:50:55.549117 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:50:55.549200 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:50:55.551313 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:50:55.553481 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:50:55.590344 systemd[1]: Switching root.
Nov 5 15:50:55.633200 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:50:55.633279 systemd-journald[319]: Journal stopped
Nov 5 15:50:56.472514 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:50:56.472556 kernel: SELinux: policy capability open_perms=1
Nov 5 15:50:56.472564 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:50:56.472573 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:50:56.472585 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:50:56.472591 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:50:56.472600 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:50:56.472609 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:50:56.472618 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:50:56.472627 kernel: audit: type=1403 audit(1762357855.802:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:50:56.472635 systemd[1]: Successfully loaded SELinux policy in 55.722ms.
Nov 5 15:50:56.472646 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.271ms.
Nov 5 15:50:56.472653 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:50:56.472661 systemd[1]: Detected virtualization kvm.
Nov 5 15:50:56.472669 systemd[1]: Detected architecture x86-64.
Nov 5 15:50:56.472678 systemd[1]: Detected first boot.
Nov 5 15:50:56.472686 systemd[1]: Hostname set to .
Nov 5 15:50:56.472694 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:50:56.472714 zram_generator::config[1164]: No configuration found.
Nov 5 15:50:56.472725 kernel: Guest personality initialized and is inactive
Nov 5 15:50:56.472736 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:50:56.472747 kernel: Initialized host personality
Nov 5 15:50:56.472757 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:50:56.472765 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:50:56.472772 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:50:56.472779 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:50:56.472787 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:50:56.472795 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:50:56.472805 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:50:56.472816 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:50:56.472827 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:50:56.472834 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:50:56.472842 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:50:56.472850 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:50:56.472859 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:50:56.472867 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:50:56.472874 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:50:56.472884 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:50:56.472894 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:50:56.472901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:50:56.472910 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:50:56.472918 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:50:56.472925 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:50:56.472934 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:50:56.472943 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:50:56.472951 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:50:56.472960 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:50:56.473500 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:50:56.473513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:50:56.473521 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:50:56.473532 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:50:56.473547 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:50:56.473559 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:50:56.473570 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:50:56.473578 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:50:56.473586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:50:56.473593 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:50:56.473600 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:50:56.473608 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:50:56.473616 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:50:56.473624 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:50:56.473635 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:50:56.473647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:56.473660 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:50:56.473671 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:50:56.473684 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:50:56.473697 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:50:56.473719 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:50:56.473731 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:50:56.473743 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:50:56.473754 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:50:56.473768 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:50:56.473781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:50:56.473793 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:50:56.473800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:50:56.473807 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:50:56.473817 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:50:56.473824 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:50:56.473832 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:50:56.473840 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:50:56.473848 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:50:56.473855 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:50:56.473863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:50:56.473870 kernel: ACPI: bus type drm_connector registered
Nov 5 15:50:56.473879 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:50:56.474525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:50:56.474541 kernel: fuse: init (API version 7.41)
Nov 5 15:50:56.474551 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:50:56.474560 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:50:56.474568 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:50:56.474577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:50:56.474586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:56.474596 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:50:56.474628 systemd-journald[1255]: Collecting audit messages is disabled.
Nov 5 15:50:56.474646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:50:56.474657 systemd-journald[1255]: Journal started
Nov 5 15:50:56.474674 systemd-journald[1255]: Runtime Journal (/run/log/journal/4d7b6872bec74d068bcf5c6663c44852) is 4.7M, max 38.3M, 33.5M free.
Nov 5 15:50:56.211791 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:50:56.477258 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:50:56.219541 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 5 15:50:56.219877 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:50:56.477272 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:50:56.478785 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:50:56.479457 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:50:56.480081 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:50:56.480943 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:50:56.481744 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:50:56.482537 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:50:56.482750 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:50:56.483538 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:50:56.483741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:50:56.484680 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:50:56.484895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:50:56.485762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:50:56.485956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:50:56.486734 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:50:56.486951 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:50:56.487814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:50:56.488022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:50:56.488895 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:50:56.489977 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:50:56.491533 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:50:56.492473 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:50:56.499676 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:50:56.501077 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:50:56.504259 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:50:56.505240 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:50:56.505664 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:50:56.505682 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:50:56.506753 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:50:56.507381 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:50:56.513129 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:50:56.515923 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:50:56.516425 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:50:56.517251 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:50:56.517883 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:50:56.519253 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:50:56.520302 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:50:56.522332 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:50:56.524025 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:50:56.525268 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:50:56.542266 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:50:56.545744 systemd-journald[1255]: Time spent on flushing to /var/log/journal/4d7b6872bec74d068bcf5c6663c44852 is 27.030ms for 1152 entries.
Nov 5 15:50:56.545744 systemd-journald[1255]: System Journal (/var/log/journal/4d7b6872bec74d068bcf5c6663c44852) is 8M, max 588.1M, 580.1M free.
Nov 5 15:50:56.587341 systemd-journald[1255]: Received client request to flush runtime journal.
Nov 5 15:50:56.587393 kernel: loop1: detected capacity change from 0 to 128048
Nov 5 15:50:56.547762 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:50:56.548369 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:50:56.551316 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:50:56.588609 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:50:56.593613 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:50:56.598339 kernel: loop2: detected capacity change from 0 to 8
Nov 5 15:50:56.602663 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:50:56.607417 kernel: loop3: detected capacity change from 0 to 229808
Nov 5 15:50:56.604617 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:50:56.609389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:50:56.611248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:50:56.621245 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:50:56.631206 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Nov 5 15:50:56.631577 systemd-tmpfiles[1307]: ACLs are not supported, ignoring.
Nov 5 15:50:56.634046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:50:56.646192 kernel: loop4: detected capacity change from 0 to 110984
Nov 5 15:50:56.652831 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:50:56.674184 kernel: loop5: detected capacity change from 0 to 128048
Nov 5 15:50:56.692657 systemd-resolved[1306]: Positive Trust Anchors:
Nov 5 15:50:56.693178 kernel: loop6: detected capacity change from 0 to 8
Nov 5 15:50:56.692889 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:50:56.692894 systemd-resolved[1306]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:50:56.692915 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:50:56.697273 kernel: loop7: detected capacity change from 0 to 229808
Nov 5 15:50:56.706105 systemd-resolved[1306]: Using system hostname 'ci-4487-0-1-1-e8a5680daa'.
Nov 5 15:50:56.706923 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:50:56.707451 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:50:56.718182 kernel: loop1: detected capacity change from 0 to 110984
Nov 5 15:50:56.726427 (sd-merge)[1319]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'.
Nov 5 15:50:56.729003 (sd-merge)[1319]: Merged extensions into '/usr'.
Nov 5 15:50:56.731818 systemd[1]: Reload requested from client PID 1289 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 15:50:56.731884 systemd[1]: Reloading...
Nov 5 15:50:56.781191 zram_generator::config[1343]: No configuration found.
Nov 5 15:50:56.899633 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 15:50:56.899863 systemd[1]: Reloading finished in 167 ms.
Nov 5 15:50:56.926395 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 15:50:56.927230 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 15:50:56.932262 systemd[1]: Starting ensure-sysext.service...
Nov 5 15:50:56.935270 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:50:56.940982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:50:56.949269 systemd[1]: Reload requested from client PID 1390 ('systemctl') (unit ensure-sysext.service)...
Nov 5 15:50:56.949282 systemd[1]: Reloading...
Nov 5 15:50:56.951854 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 15:50:56.952042 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 15:50:56.952243 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 15:50:56.952440 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 15:50:56.952893 systemd-tmpfiles[1391]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 15:50:56.953078 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Nov 5 15:50:56.953156 systemd-tmpfiles[1391]: ACLs are not supported, ignoring.
Nov 5 15:50:56.958230 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:50:56.958237 systemd-tmpfiles[1391]: Skipping /boot
Nov 5 15:50:56.966461 systemd-tmpfiles[1391]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:50:56.966525 systemd-tmpfiles[1391]: Skipping /boot
Nov 5 15:50:56.973150 systemd-udevd[1392]: Using default interface naming scheme 'v257'.
Nov 5 15:50:56.992188 zram_generator::config[1421]: No configuration found.
Nov 5 15:50:57.095189 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Nov 5 15:50:57.098183 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 15:50:57.122297 kernel: ACPI: button: Power Button [PWRF]
Nov 5 15:50:57.146413 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 15:50:57.146697 systemd[1]: Reloading finished in 197 ms.
Nov 5 15:50:57.155122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:50:57.162770 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:50:57.168932 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped.
Nov 5 15:50:57.172719 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:50:57.175314 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 15:50:57.175832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:50:57.177323 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 15:50:57.182445 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 15:50:57.182625 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 15:50:57.181278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:50:57.187679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:50:57.194312 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:50:57.195750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:50:57.195786 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:50:57.198255 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 15:50:57.201833 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:50:57.203633 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 15:50:57.209339 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:50:57.209582 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:50:57.212832 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:50:57.217834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:50:57.225985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:50:57.227721 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:50:57.242208 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Nov 5 15:50:57.245190 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Nov 5 15:50:57.251212 kernel: Console: switching to colour dummy device 80x25
Nov 5 15:50:57.254390 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 5 15:50:57.254433 kernel: [drm] features: -context_init
Nov 5 15:50:57.257962 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:57.258082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:50:57.261196 kernel: [drm] number of scanouts: 1
Nov 5 15:50:57.262192 kernel: [drm] number of cap sets: 0
Nov 5 15:50:57.265186 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Nov 5 15:50:57.268129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:50:57.269809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:50:57.271871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:50:57.271961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:50:57.272023 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:50:57.272080 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:57.275257 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 5 15:50:57.281135 kernel: Console: switching to colour frame buffer device 160x50
Nov 5 15:50:57.280921 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:57.281327 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:50:57.286275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:50:57.292194 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 5 15:50:57.288263 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:50:57.288381 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:50:57.288552 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:50:57.290998 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 15:50:57.296212 systemd[1]: Finished ensure-sysext.service.
Nov 5 15:50:57.299772 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 15:50:57.302494 kernel: EDAC MC: Ver: 3.0.0
Nov 5 15:50:57.310063 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 15:50:57.328358 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 5 15:50:57.332332 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 15:50:57.336438 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:50:57.336558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:50:57.336833 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:50:57.364066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:50:57.365085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:50:57.373803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:50:57.374014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:50:57.377798 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:50:57.389249 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:57.410635 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:50:57.410854 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:50:57.414412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:50:57.414764 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:57.417687 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:57.428071 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:50:57.428652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:57.432099 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:50:57.433035 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 15:50:57.436057 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 15:50:57.440079 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 15:50:57.441441 augenrules[1565]: No rules
Nov 5 15:50:57.441330 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:50:57.441805 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:50:57.508602 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 15:50:57.509392 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 15:50:57.517855 systemd-networkd[1509]: lo: Link UP
Nov 5 15:50:57.517864 systemd-networkd[1509]: lo: Gained carrier
Nov 5 15:50:57.519812 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:50:57.520259 systemd[1]: Reached target network.target - Network.
Nov 5 15:50:57.521390 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:57.521398 systemd-networkd[1509]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:50:57.521879 systemd-networkd[1509]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:57.521883 systemd-networkd[1509]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:50:57.522249 systemd-networkd[1509]: eth0: Link UP
Nov 5 15:50:57.522372 systemd-networkd[1509]: eth0: Gained carrier
Nov 5 15:50:57.522381 systemd-networkd[1509]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:57.522778 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 15:50:57.530694 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 15:50:57.532408 systemd-networkd[1509]: eth1: Link UP
Nov 5 15:50:57.532996 systemd-networkd[1509]: eth1: Gained carrier
Nov 5 15:50:57.533016 systemd-networkd[1509]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:50:57.555612 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 15:50:57.563117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:50:57.566408 systemd-networkd[1509]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Nov 5 15:50:57.568684 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Nov 5 15:50:57.587239 systemd-networkd[1509]: eth0: DHCPv4 address 46.62.132.115/32, gateway 172.31.1.1 acquired from 172.31.1.1
Nov 5 15:50:57.587550 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Nov 5 15:50:57.588131 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection.
Nov 5 15:50:57.814531 ldconfig[1503]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 15:50:57.825133 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 15:50:57.829468 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 15:50:57.851476 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 15:50:57.854528 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:50:57.855073 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 15:50:57.855556 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 15:50:57.856230 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 15:50:57.859197 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 15:50:57.859519 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 15:50:57.859790 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 15:50:57.860153 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 15:50:57.860306 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:50:57.861031 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:50:57.865175 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 15:50:57.866778 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 15:50:57.871836 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 15:50:57.875972 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 15:50:57.876662 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 15:50:57.896155 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 15:50:57.898053 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 15:50:57.901129 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 15:50:57.903929 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:50:57.904738 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:50:57.905064 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:50:57.905080 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 15:50:57.905958 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 15:50:57.910262 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 5 15:50:57.911535 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 15:50:57.919358 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 15:50:57.924352 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 15:50:57.932651 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 15:50:57.935006 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 15:50:57.935508 jq[1598]: false Nov 5 15:50:57.937598 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:50:57.939258 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:50:57.942236 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:50:57.947158 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Nov 5 15:50:57.952333 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:50:57.957990 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:50:57.963513 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:50:57.976303 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:50:57.978997 coreos-metadata[1593]: Nov 05 15:50:57.978 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Nov 5 15:50:57.979322 coreos-metadata[1593]: Nov 05 15:50:57.979 INFO Fetch successful Nov 5 15:50:57.979395 coreos-metadata[1593]: Nov 05 15:50:57.979 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Nov 5 15:50:57.979456 coreos-metadata[1593]: Nov 05 15:50:57.979 INFO Fetch successful Nov 5 15:50:57.981310 google_oslogin_nss_cache[1600]: oslogin_cache_refresh[1600]: Refreshing passwd entry cache Nov 5 15:50:57.982204 oslogin_cache_refresh[1600]: Refreshing passwd entry cache Nov 5 15:50:57.982328 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:50:57.983478 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 5 15:50:57.984802 google_oslogin_nss_cache[1600]: oslogin_cache_refresh[1600]: Failure getting users, quitting Nov 5 15:50:57.985544 oslogin_cache_refresh[1600]: Failure getting users, quitting Nov 5 15:50:57.985864 google_oslogin_nss_cache[1600]: oslogin_cache_refresh[1600]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:50:57.985864 google_oslogin_nss_cache[1600]: oslogin_cache_refresh[1600]: Refreshing group entry cache Nov 5 15:50:57.985568 oslogin_cache_refresh[1600]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:50:57.985600 oslogin_cache_refresh[1600]: Refreshing group entry cache Nov 5 15:50:57.986268 google_oslogin_nss_cache[1600]: oslogin_cache_refresh[1600]: Failure getting groups, quitting Nov 5 15:50:57.986300 oslogin_cache_refresh[1600]: Failure getting groups, quitting Nov 5 15:50:57.986338 google_oslogin_nss_cache[1600]: oslogin_cache_refresh[1600]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:50:57.986356 oslogin_cache_refresh[1600]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:50:57.988921 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:50:57.989302 extend-filesystems[1599]: Found /dev/sda6 Nov 5 15:50:57.994519 extend-filesystems[1599]: Found /dev/sda9 Nov 5 15:50:58.000652 extend-filesystems[1599]: Checking size of /dev/sda9 Nov 5 15:50:58.002271 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:50:58.006886 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:50:58.007272 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:50:58.007685 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:50:58.007921 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Nov 5 15:50:58.010856 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:50:58.011151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:50:58.015501 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:50:58.015635 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:50:58.037276 jq[1615]: true Nov 5 15:50:58.040201 tar[1627]: linux-amd64/LICENSE Nov 5 15:50:58.040201 tar[1627]: linux-amd64/helm Nov 5 15:50:58.048755 extend-filesystems[1599]: Resized partition /dev/sda9 Nov 5 15:50:58.053673 update_engine[1613]: I20251105 15:50:58.051244 1613 main.cc:92] Flatcar Update Engine starting Nov 5 15:50:58.057727 (ntainerd)[1640]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:50:58.067270 extend-filesystems[1655]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:50:58.070190 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:50:58.069799 dbus-daemon[1594]: [system] SELinux support is enabled Nov 5 15:50:58.076965 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:50:58.076986 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:50:58.085089 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Nov 5 15:50:58.085268 update_engine[1613]: I20251105 15:50:58.082422 1613 update_check_scheduler.cc:74] Next update check in 10m43s Nov 5 15:50:58.085320 jq[1646]: true Nov 5 15:50:58.080230 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Nov 5 15:50:58.080246 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:50:58.089872 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:50:58.105656 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:50:58.137800 systemd-logind[1606]: New seat seat0. Nov 5 15:50:58.138796 systemd-logind[1606]: Watching system buttons on /dev/input/event3 (Power Button) Nov 5 15:50:58.138814 systemd-logind[1606]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:50:58.139009 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:50:58.149831 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:50:58.151375 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:50:58.252328 bash[1680]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:50:58.253028 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:50:58.257763 systemd[1]: Starting sshkeys.service... Nov 5 15:50:58.268579 locksmithd[1661]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:50:58.277178 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Nov 5 15:50:58.282533 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 15:50:58.289396 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 5 15:50:58.311345 sshd_keygen[1618]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:50:58.313158 extend-filesystems[1655]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 5 15:50:58.313158 extend-filesystems[1655]: old_desc_blocks = 1, new_desc_blocks = 5 Nov 5 15:50:58.313158 extend-filesystems[1655]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. 
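The extend-filesystems flow above grows the root filesystem online — resize2fs 1.47.3 resizing /dev/sda9 while it is mounted on /. The kernel lines report a jump from 1617920 to 8410107 blocks at 4 KiB each; translating the block counts from the log into sizes:

```python
# Block counts from the EXT4-fs resize messages above; this filesystem
# uses 4 KiB blocks ("8410107 (4k) blocks").
BLOCK = 4096
old_blocks, new_blocks = 1_617_920, 8_410_107

old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK
GiB = 1024 ** 3

print(f"{old_bytes / GiB:.2f} GiB -> {new_bytes / GiB:.2f} GiB")
# → 6.17 GiB -> 32.08 GiB
```

This matches the usual cloud-image pattern: the shipped image carries a small root filesystem, and first boot expands it to fill the provisioned disk.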
Nov 5 15:50:58.319633 extend-filesystems[1599]: Resized filesystem in /dev/sda9 Nov 5 15:50:58.314127 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:50:58.314344 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:50:58.335380 coreos-metadata[1692]: Nov 05 15:50:58.333 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Nov 5 15:50:58.335380 coreos-metadata[1692]: Nov 05 15:50:58.334 INFO Fetch successful Nov 5 15:50:58.337527 containerd[1640]: time="2025-11-05T15:50:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:50:58.336531 unknown[1692]: wrote ssh authorized keys file for user: core Nov 5 15:50:58.337748 containerd[1640]: time="2025-11-05T15:50:58.337605624Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350603459Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.6µs" Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350631839Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350645869Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350755219Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350763669Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:50:58.351571 containerd[1640]: 
time="2025-11-05T15:50:58.350779789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350813519Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350819419Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350957789Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350971149Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350979429Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351571 containerd[1640]: time="2025-11-05T15:50:58.350984009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351772 containerd[1640]: time="2025-11-05T15:50:58.351028469Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351772 containerd[1640]: time="2025-11-05T15:50:58.351142219Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351772 containerd[1640]: time="2025-11-05T15:50:58.351158139Z" level=info msg="skip loading plugin" 
error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:50:58.351772 containerd[1640]: time="2025-11-05T15:50:58.351295619Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:50:58.351772 containerd[1640]: time="2025-11-05T15:50:58.351325569Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:50:58.351772 containerd[1640]: time="2025-11-05T15:50:58.351492389Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:50:58.351772 containerd[1640]: time="2025-11-05T15:50:58.351540609Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:50:58.355515 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:50:58.360407 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Nov 5 15:50:58.363281 containerd[1640]: time="2025-11-05T15:50:58.363242484Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:50:58.363336 containerd[1640]: time="2025-11-05T15:50:58.363322514Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:50:58.363360 containerd[1640]: time="2025-11-05T15:50:58.363343634Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:50:58.363792 containerd[1640]: time="2025-11-05T15:50:58.363356484Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:50:58.363831 containerd[1640]: time="2025-11-05T15:50:58.363814505Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:50:58.364244 containerd[1640]: time="2025-11-05T15:50:58.364207685Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:50:58.364244 containerd[1640]: time="2025-11-05T15:50:58.364239595Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:50:58.364286 containerd[1640]: time="2025-11-05T15:50:58.364253985Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:50:58.365242 containerd[1640]: time="2025-11-05T15:50:58.365217795Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:50:58.365308 containerd[1640]: time="2025-11-05T15:50:58.365245635Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:50:58.365308 containerd[1640]: time="2025-11-05T15:50:58.365256735Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 
15:50:58.365308 containerd[1640]: time="2025-11-05T15:50:58.365271745Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:50:58.365442 containerd[1640]: time="2025-11-05T15:50:58.365413595Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:50:58.365468 containerd[1640]: time="2025-11-05T15:50:58.365441175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:50:58.365490 containerd[1640]: time="2025-11-05T15:50:58.365477975Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:50:58.365522 containerd[1640]: time="2025-11-05T15:50:58.365496795Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:50:58.365522 containerd[1640]: time="2025-11-05T15:50:58.365508475Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:50:58.365551 containerd[1640]: time="2025-11-05T15:50:58.365521875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:50:58.365586 containerd[1640]: time="2025-11-05T15:50:58.365549465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:50:58.365586 containerd[1640]: time="2025-11-05T15:50:58.365560575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:50:58.365586 containerd[1640]: time="2025-11-05T15:50:58.365572665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:50:58.365586 containerd[1640]: time="2025-11-05T15:50:58.365582745Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:50:58.365632 containerd[1640]: time="2025-11-05T15:50:58.365607185Z" level=info 
msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:50:58.365733 containerd[1640]: time="2025-11-05T15:50:58.365690825Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:50:58.365754 containerd[1640]: time="2025-11-05T15:50:58.365737645Z" level=info msg="Start snapshots syncer" Nov 5 15:50:58.365778 containerd[1640]: time="2025-11-05T15:50:58.365764005Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:50:58.366177 containerd[1640]: time="2025-11-05T15:50:58.366119115Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableU
nprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:50:58.367530 update-ssh-keys[1705]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:50:58.368468 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 15:50:58.370268 containerd[1640]: time="2025-11-05T15:50:58.370241697Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:50:58.371349 containerd[1640]: time="2025-11-05T15:50:58.371321898Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:50:58.371570 containerd[1640]: time="2025-11-05T15:50:58.371550348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:50:58.371615 containerd[1640]: time="2025-11-05T15:50:58.371601488Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:50:58.371630 containerd[1640]: time="2025-11-05T15:50:58.371619238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:50:58.371641 containerd[1640]: time="2025-11-05T15:50:58.371630098Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:50:58.371654 containerd[1640]: time="2025-11-05T15:50:58.371642038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:50:58.371681 containerd[1640]: time="2025-11-05T15:50:58.371668118Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:50:58.371723 containerd[1640]: time="2025-11-05T15:50:58.371683398Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:50:58.371736 containerd[1640]: time="2025-11-05T15:50:58.371720188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:50:58.371759 containerd[1640]: time="2025-11-05T15:50:58.371731908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:50:58.371772 containerd[1640]: time="2025-11-05T15:50:58.371765378Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:50:58.371817 containerd[1640]: time="2025-11-05T15:50:58.371804668Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:50:58.371883 containerd[1640]: time="2025-11-05T15:50:58.371867618Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:50:58.371898 containerd[1640]: time="2025-11-05T15:50:58.371883558Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:50:58.371911 containerd[1640]: time="2025-11-05T15:50:58.371895558Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:50:58.371911 containerd[1640]: time="2025-11-05T15:50:58.371903518Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:50:58.371933 containerd[1640]: time="2025-11-05T15:50:58.371916778Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:50:58.371958 
containerd[1640]: time="2025-11-05T15:50:58.371946398Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:50:58.371978 containerd[1640]: time="2025-11-05T15:50:58.371967198Z" level=info msg="runtime interface created" Nov 5 15:50:58.371991 containerd[1640]: time="2025-11-05T15:50:58.371977728Z" level=info msg="created NRI interface" Nov 5 15:50:58.371991 containerd[1640]: time="2025-11-05T15:50:58.371986078Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:50:58.372015 containerd[1640]: time="2025-11-05T15:50:58.371998438Z" level=info msg="Connect containerd service" Nov 5 15:50:58.372205 containerd[1640]: time="2025-11-05T15:50:58.372158628Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:50:58.373274 systemd[1]: Finished sshkeys.service. Nov 5 15:50:58.377630 containerd[1640]: time="2025-11-05T15:50:58.377335570Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:50:58.394059 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:50:58.394382 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:50:58.399912 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:50:58.424801 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:50:58.428100 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:50:58.431768 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:50:58.432123 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 5 15:50:58.502245 containerd[1640]: time="2025-11-05T15:50:58.502208652Z" level=info msg="Start subscribing containerd event" Nov 5 15:50:58.502373 containerd[1640]: time="2025-11-05T15:50:58.502366532Z" level=info msg="Start recovering state" Nov 5 15:50:58.502471 containerd[1640]: time="2025-11-05T15:50:58.502464772Z" level=info msg="Start event monitor" Nov 5 15:50:58.502781 containerd[1640]: time="2025-11-05T15:50:58.502770162Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:50:58.502819 containerd[1640]: time="2025-11-05T15:50:58.502814062Z" level=info msg="Start streaming server" Nov 5 15:50:58.502849 containerd[1640]: time="2025-11-05T15:50:58.502843532Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:50:58.502871 containerd[1640]: time="2025-11-05T15:50:58.502867032Z" level=info msg="runtime interface starting up..." Nov 5 15:50:58.502890 containerd[1640]: time="2025-11-05T15:50:58.502886462Z" level=info msg="starting plugins..." Nov 5 15:50:58.502917 containerd[1640]: time="2025-11-05T15:50:58.502913552Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:50:58.502991 containerd[1640]: time="2025-11-05T15:50:58.502750592Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:50:58.503050 containerd[1640]: time="2025-11-05T15:50:58.503043602Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:50:58.503117 containerd[1640]: time="2025-11-05T15:50:58.503110023Z" level=info msg="containerd successfully booted in 0.167042s" Nov 5 15:50:58.503243 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:50:58.544970 tar[1627]: linux-amd64/README.md Nov 5 15:50:58.553273 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:50:58.652519 systemd-networkd[1509]: eth0: Gained IPv6LL Nov 5 15:50:58.654309 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection. 
Nov 5 15:50:58.656238 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:50:58.658054 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:50:58.664046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:50:58.669023 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:50:58.700195 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:50:58.780447 systemd-networkd[1509]: eth1: Gained IPv6LL Nov 5 15:50:58.781091 systemd-timesyncd[1540]: Network configuration changed, trying to establish connection. Nov 5 15:50:59.593063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:50:59.595543 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:50:59.598861 systemd[1]: Startup finished in 2.858s (kernel) + 6.838s (initrd) + 3.851s (userspace) = 13.548s. Nov 5 15:50:59.612570 (kubelet)[1752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:51:00.227356 kubelet[1752]: E1105 15:51:00.227296 1752 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:51:00.230555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:51:00.230665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:51:00.230954 systemd[1]: kubelet.service: Consumed 1.041s CPU time, 268.1M memory peak. Nov 5 15:51:10.481244 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:51:10.483499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 5 15:51:10.633904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:51:10.646579 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:51:10.697590 kubelet[1771]: E1105 15:51:10.697516 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:51:10.701051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:51:10.701230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:51:10.701547 systemd[1]: kubelet.service: Consumed 168ms CPU time, 108.5M memory peak. Nov 5 15:51:20.952645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:51:20.956623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:51:21.126295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:51:21.138438 (kubelet)[1786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:51:21.192240 kubelet[1786]: E1105 15:51:21.192145 1786 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:51:21.196206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:51:21.196466 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 5 15:51:21.197125 systemd[1]: kubelet.service: Consumed 181ms CPU time, 110.5M memory peak. Nov 5 15:51:29.061959 systemd-timesyncd[1540]: Contacted time server 194.59.205.229:123 (2.flatcar.pool.ntp.org). Nov 5 15:51:29.062057 systemd-timesyncd[1540]: Initial clock synchronization to Wed 2025-11-05 15:51:29.166315 UTC. Nov 5 15:51:31.448376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 15:51:31.451706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:51:31.676811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:51:31.691716 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:51:31.750051 kubelet[1800]: E1105 15:51:31.749888 1800 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:51:31.754335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:51:31.754547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:51:31.755302 systemd[1]: kubelet.service: Consumed 229ms CPU time, 109.8M memory peak. Nov 5 15:51:40.388495 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:51:40.391589 systemd[1]: Started sshd@0-46.62.132.115:22-139.178.68.195:52708.service - OpenSSH per-connection server daemon (139.178.68.195:52708). 
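The timesyncd lines above show the first successful NTP contact (2.flatcar.pool.ntp.org) stepping the clock: the journal stamps the synchronization message at 15:51:29.062057 local time while reporting the clock set to 15:51:29.166315 UTC. Comparing the two gives a rough size for the step — only an approximation, since the journal stamp marks when the message was logged, not the exact instant of adjustment:

```python
from datetime import datetime

# Timestamps from the systemd-timesyncd lines above: the journal's stamp on
# the sync message, and the time the clock was synchronized to.
local = datetime.strptime("15:51:29.062057", "%H:%M:%S.%f")
synced = datetime.strptime("15:51:29.166315", "%H:%M:%S.%f")
step_ms = (synced - local).total_seconds() * 1000
print(f"clock stepped forward by roughly {step_ms:.0f} ms")  # ~104 ms
```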
Nov 5 15:51:41.458122 sshd[1808]: Accepted publickey for core from 139.178.68.195 port 52708 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:51:41.459668 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:41.470750 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 5 15:51:41.473128 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 5 15:51:41.484256 systemd-logind[1606]: New session 1 of user core.
Nov 5 15:51:41.500329 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 5 15:51:41.503546 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 5 15:51:41.517803 (systemd)[1813]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 5 15:51:41.521163 systemd-logind[1606]: New session c1 of user core.
Nov 5 15:51:41.686059 systemd[1813]: Queued start job for default target default.target.
Nov 5 15:51:41.692787 systemd[1813]: Created slice app.slice - User Application Slice.
Nov 5 15:51:41.692806 systemd[1813]: Reached target paths.target - Paths.
Nov 5 15:51:41.692831 systemd[1813]: Reached target timers.target - Timers.
Nov 5 15:51:41.694378 systemd[1813]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 5 15:51:41.703776 systemd[1813]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 5 15:51:41.703851 systemd[1813]: Reached target sockets.target - Sockets.
Nov 5 15:51:41.704030 systemd[1813]: Reached target basic.target - Basic System.
Nov 5 15:51:41.704155 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 5 15:51:41.704565 systemd[1813]: Reached target default.target - Main User Target.
Nov 5 15:51:41.704606 systemd[1813]: Startup finished in 173ms.
Nov 5 15:51:41.713380 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 5 15:51:42.005489 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 5 15:51:42.007916 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:51:42.165857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:51:42.184563 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:51:42.244333 kubelet[1831]: E1105 15:51:42.244249 1831 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:51:42.247480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:51:42.247674 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:51:42.248078 systemd[1]: kubelet.service: Consumed 183ms CPU time, 110.8M memory peak.
Nov 5 15:51:42.425564 systemd[1]: Started sshd@1-46.62.132.115:22-139.178.68.195:52722.service - OpenSSH per-connection server daemon (139.178.68.195:52722).
Nov 5 15:51:43.051809 update_engine[1613]: I20251105 15:51:43.051684 1613 update_attempter.cc:509] Updating boot flags...
Nov 5 15:51:43.470022 sshd[1839]: Accepted publickey for core from 139.178.68.195 port 52722 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:51:43.472215 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:43.481087 systemd-logind[1606]: New session 2 of user core.
Nov 5 15:51:43.485426 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 5 15:51:44.176292 sshd[1862]: Connection closed by 139.178.68.195 port 52722
Nov 5 15:51:44.177242 sshd-session[1839]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:44.183666 systemd-logind[1606]: Session 2 logged out. Waiting for processes to exit.
Nov 5 15:51:44.183858 systemd[1]: sshd@1-46.62.132.115:22-139.178.68.195:52722.service: Deactivated successfully.
Nov 5 15:51:44.186694 systemd[1]: session-2.scope: Deactivated successfully.
Nov 5 15:51:44.189481 systemd-logind[1606]: Removed session 2.
Nov 5 15:51:44.352941 systemd[1]: Started sshd@2-46.62.132.115:22-139.178.68.195:37164.service - OpenSSH per-connection server daemon (139.178.68.195:37164).
Nov 5 15:51:45.371160 sshd[1868]: Accepted publickey for core from 139.178.68.195 port 37164 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:51:45.372987 sshd-session[1868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:45.379227 systemd-logind[1606]: New session 3 of user core.
Nov 5 15:51:45.387364 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 5 15:51:46.058369 sshd[1871]: Connection closed by 139.178.68.195 port 37164
Nov 5 15:51:46.059202 sshd-session[1868]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:46.063488 systemd-logind[1606]: Session 3 logged out. Waiting for processes to exit.
Nov 5 15:51:46.063565 systemd[1]: sshd@2-46.62.132.115:22-139.178.68.195:37164.service: Deactivated successfully.
Nov 5 15:51:46.065088 systemd[1]: session-3.scope: Deactivated successfully.
Nov 5 15:51:46.066676 systemd-logind[1606]: Removed session 3.
Nov 5 15:51:46.242515 systemd[1]: Started sshd@3-46.62.132.115:22-139.178.68.195:37180.service - OpenSSH per-connection server daemon (139.178.68.195:37180).
Nov 5 15:51:47.245514 sshd[1877]: Accepted publickey for core from 139.178.68.195 port 37180 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:51:47.247224 sshd-session[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:47.252481 systemd-logind[1606]: New session 4 of user core.
Nov 5 15:51:47.262393 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 5 15:51:47.939962 sshd[1880]: Connection closed by 139.178.68.195 port 37180
Nov 5 15:51:47.940811 sshd-session[1877]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:47.945126 systemd[1]: sshd@3-46.62.132.115:22-139.178.68.195:37180.service: Deactivated successfully.
Nov 5 15:51:47.947235 systemd[1]: session-4.scope: Deactivated successfully.
Nov 5 15:51:47.948126 systemd-logind[1606]: Session 4 logged out. Waiting for processes to exit.
Nov 5 15:51:47.949678 systemd-logind[1606]: Removed session 4.
Nov 5 15:51:48.116609 systemd[1]: Started sshd@4-46.62.132.115:22-139.178.68.195:37188.service - OpenSSH per-connection server daemon (139.178.68.195:37188).
Nov 5 15:51:49.145898 sshd[1886]: Accepted publickey for core from 139.178.68.195 port 37188 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:51:49.147459 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:49.152320 systemd-logind[1606]: New session 5 of user core.
Nov 5 15:51:49.159396 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 5 15:51:49.694253 sudo[1890]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 5 15:51:49.694499 sudo[1890]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:51:49.711126 sudo[1890]: pam_unix(sudo:session): session closed for user root
Nov 5 15:51:49.876037 sshd[1889]: Connection closed by 139.178.68.195 port 37188
Nov 5 15:51:49.876900 sshd-session[1886]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:49.881495 systemd[1]: sshd@4-46.62.132.115:22-139.178.68.195:37188.service: Deactivated successfully.
Nov 5 15:51:49.881570 systemd-logind[1606]: Session 5 logged out. Waiting for processes to exit.
Nov 5 15:51:49.883069 systemd[1]: session-5.scope: Deactivated successfully.
Nov 5 15:51:49.884951 systemd-logind[1606]: Removed session 5.
Nov 5 15:51:50.053744 systemd[1]: Started sshd@5-46.62.132.115:22-139.178.68.195:37198.service - OpenSSH per-connection server daemon (139.178.68.195:37198).
Nov 5 15:51:51.074547 sshd[1896]: Accepted publickey for core from 139.178.68.195 port 37198 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:51:51.076130 sshd-session[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:51.082210 systemd-logind[1606]: New session 6 of user core.
Nov 5 15:51:51.091497 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 5 15:51:51.609148 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 5 15:51:51.609523 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:51:51.616336 sudo[1901]: pam_unix(sudo:session): session closed for user root
Nov 5 15:51:51.625462 sudo[1900]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Nov 5 15:51:51.625814 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:51:51.640525 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:51:51.688825 augenrules[1923]: No rules
Nov 5 15:51:51.690347 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:51:51.690611 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:51:51.692427 sudo[1900]: pam_unix(sudo:session): session closed for user root
Nov 5 15:51:51.855593 sshd[1899]: Connection closed by 139.178.68.195 port 37198
Nov 5 15:51:51.856502 sshd-session[1896]: pam_unix(sshd:session): session closed for user core
Nov 5 15:51:51.862602 systemd[1]: sshd@5-46.62.132.115:22-139.178.68.195:37198.service: Deactivated successfully.
Nov 5 15:51:51.865686 systemd[1]: session-6.scope: Deactivated successfully.
Nov 5 15:51:51.867275 systemd-logind[1606]: Session 6 logged out. Waiting for processes to exit.
Nov 5 15:51:51.869523 systemd-logind[1606]: Removed session 6.
Nov 5 15:51:52.065482 systemd[1]: Started sshd@6-46.62.132.115:22-139.178.68.195:37204.service - OpenSSH per-connection server daemon (139.178.68.195:37204).
Nov 5 15:51:52.330311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 5 15:51:52.333579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:51:52.531335 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:51:52.542662 (kubelet)[1943]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:51:52.603811 kubelet[1943]: E1105 15:51:52.603620 1943 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:51:52.607033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:51:52.607305 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:51:52.608030 systemd[1]: kubelet.service: Consumed 189ms CPU time, 110.2M memory peak.
Nov 5 15:51:53.186413 sshd[1932]: Accepted publickey for core from 139.178.68.195 port 37204 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:51:53.188736 sshd-session[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:51:53.198142 systemd-logind[1606]: New session 7 of user core.
Nov 5 15:51:53.204443 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 5 15:51:53.771801 sudo[1951]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 5 15:51:53.772150 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 5 15:51:54.232004 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 5 15:51:54.253517 (dockerd)[1968]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 5 15:51:54.547471 dockerd[1968]: time="2025-11-05T15:51:54.547296738Z" level=info msg="Starting up"
Nov 5 15:51:54.551831 dockerd[1968]: time="2025-11-05T15:51:54.551741165Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Nov 5 15:51:54.569305 dockerd[1968]: time="2025-11-05T15:51:54.569207282Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Nov 5 15:51:54.589800 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1526906511-merged.mount: Deactivated successfully.
Nov 5 15:51:54.610691 systemd[1]: var-lib-docker-metacopy\x2dcheck2033790594-merged.mount: Deactivated successfully.
Nov 5 15:51:54.636185 dockerd[1968]: time="2025-11-05T15:51:54.636109898Z" level=info msg="Loading containers: start."
Nov 5 15:51:54.646227 kernel: Initializing XFRM netlink socket
Nov 5 15:51:54.885671 systemd-networkd[1509]: docker0: Link UP
Nov 5 15:51:54.896494 dockerd[1968]: time="2025-11-05T15:51:54.896392392Z" level=info msg="Loading containers: done."
Nov 5 15:51:54.917080 dockerd[1968]: time="2025-11-05T15:51:54.917019636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 5 15:51:54.917236 dockerd[1968]: time="2025-11-05T15:51:54.917137299Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Nov 5 15:51:54.917302 dockerd[1968]: time="2025-11-05T15:51:54.917278463Z" level=info msg="Initializing buildkit"
Nov 5 15:51:54.941235 dockerd[1968]: time="2025-11-05T15:51:54.941141884Z" level=info msg="Completed buildkit initialization"
Nov 5 15:51:54.947007 dockerd[1968]: time="2025-11-05T15:51:54.946926503Z" level=info msg="Daemon has completed initialization"
Nov 5 15:51:54.947007 dockerd[1968]: time="2025-11-05T15:51:54.946971817Z" level=info msg="API listen on /run/docker.sock"
Nov 5 15:51:54.947590 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 5 15:51:56.200496 containerd[1640]: time="2025-11-05T15:51:56.200412150Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Nov 5 15:51:56.748248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4103082113.mount: Deactivated successfully.
Nov 5 15:51:58.042188 containerd[1640]: time="2025-11-05T15:51:58.042116065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:58.046726 containerd[1640]: time="2025-11-05T15:51:58.046673552Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114993"
Nov 5 15:51:58.049881 containerd[1640]: time="2025-11-05T15:51:58.049804387Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:58.052219 containerd[1640]: time="2025-11-05T15:51:58.051855816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:58.053910 containerd[1640]: time="2025-11-05T15:51:58.053871553Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 1.853398619s"
Nov 5 15:51:58.053910 containerd[1640]: time="2025-11-05T15:51:58.053904933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Nov 5 15:51:58.055223 containerd[1640]: time="2025-11-05T15:51:58.054968715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Nov 5 15:51:59.351490 containerd[1640]: time="2025-11-05T15:51:59.351435845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:59.352679 containerd[1640]: time="2025-11-05T15:51:59.352431075Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020866"
Nov 5 15:51:59.353654 containerd[1640]: time="2025-11-05T15:51:59.353632623Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:59.355673 containerd[1640]: time="2025-11-05T15:51:59.355648992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:51:59.356268 containerd[1640]: time="2025-11-05T15:51:59.356251086Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.301255594s"
Nov 5 15:51:59.356335 containerd[1640]: time="2025-11-05T15:51:59.356326197Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Nov 5 15:51:59.356944 containerd[1640]: time="2025-11-05T15:51:59.356925890Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Nov 5 15:52:00.627086 containerd[1640]: time="2025-11-05T15:52:00.627019916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:00.628029 containerd[1640]: time="2025-11-05T15:52:00.627866918Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155590"
Nov 5 15:52:00.628796 containerd[1640]: time="2025-11-05T15:52:00.628772675Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:00.630969 containerd[1640]: time="2025-11-05T15:52:00.630949863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:00.631803 containerd[1640]: time="2025-11-05T15:52:00.631777041Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.274828975s"
Nov 5 15:52:00.631840 containerd[1640]: time="2025-11-05T15:52:00.631808548Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Nov 5 15:52:00.632428 containerd[1640]: time="2025-11-05T15:52:00.632404810Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Nov 5 15:52:01.639195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845193222.mount: Deactivated successfully.
Nov 5 15:52:01.984582 containerd[1640]: time="2025-11-05T15:52:01.984446447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:01.986197 containerd[1640]: time="2025-11-05T15:52:01.986026136Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929497"
Nov 5 15:52:01.987270 containerd[1640]: time="2025-11-05T15:52:01.987234600Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:01.989119 containerd[1640]: time="2025-11-05T15:52:01.989085721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:01.989721 containerd[1640]: time="2025-11-05T15:52:01.989598017Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.357167081s"
Nov 5 15:52:01.989721 containerd[1640]: time="2025-11-05T15:52:01.989631314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Nov 5 15:52:01.990083 containerd[1640]: time="2025-11-05T15:52:01.990058931Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Nov 5 15:52:02.491389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3892212965.mount: Deactivated successfully.
Nov 5 15:52:02.829863 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Nov 5 15:52:02.831756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:52:03.004329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:52:03.012424 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 15:52:03.042493 kubelet[2313]: E1105 15:52:03.042447 2313 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 15:52:03.044577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 15:52:03.044686 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 15:52:03.044908 systemd[1]: kubelet.service: Consumed 150ms CPU time, 109.6M memory peak.
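The kubelet failures above repeat on a fixed cadence: systemd schedules a restart roughly every ten seconds and increments the restart counter each time, until kubeadm eventually writes /var/lib/kubelet/config.yaml. A small sketch (mine, not part of the log; the regex assumes the journald short format shown here) pulls those counters out of such lines:

```python
import re

# Extract kubelet restart counters from journal lines like the ones above:
# "... systemd[1]: kubelet.service: Scheduled restart job, restart counter is at N."
RESTART_RE = re.compile(
    r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)"
)

def restart_counters(lines):
    """Return the restart counter from each matching journal line, in order."""
    return [int(m.group(1)) for m in map(RESTART_RE.search, lines) if m]

sample = [
    "Nov 5 15:51:20.952645 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.",
    "Nov 5 15:51:31.448376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.",
]
print(restart_counters(sample))  # -> [2, 3]
```

A gap in the counter sequence (or a reset to 1) would indicate the unit was stopped and started manually rather than crash-looping.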
Nov 5 15:52:03.264454 containerd[1640]: time="2025-11-05T15:52:03.264060673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:03.265466 containerd[1640]: time="2025-11-05T15:52:03.265275412Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332"
Nov 5 15:52:03.266251 containerd[1640]: time="2025-11-05T15:52:03.266228052Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:03.268244 containerd[1640]: time="2025-11-05T15:52:03.268220824Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:03.268907 containerd[1640]: time="2025-11-05T15:52:03.268719092Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.278633735s"
Nov 5 15:52:03.268907 containerd[1640]: time="2025-11-05T15:52:03.268745756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Nov 5 15:52:03.269107 containerd[1640]: time="2025-11-05T15:52:03.269079928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 5 15:52:03.756860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701710786.mount: Deactivated successfully.
Nov 5 15:52:03.763819 containerd[1640]: time="2025-11-05T15:52:03.763720328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:52:03.764873 containerd[1640]: time="2025-11-05T15:52:03.764798086Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160"
Nov 5 15:52:03.766360 containerd[1640]: time="2025-11-05T15:52:03.766290290Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:52:03.768781 containerd[1640]: time="2025-11-05T15:52:03.768718930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 15:52:03.770310 containerd[1640]: time="2025-11-05T15:52:03.770033926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 500.916492ms"
Nov 5 15:52:03.770310 containerd[1640]: time="2025-11-05T15:52:03.770079603Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 5 15:52:03.770735 containerd[1640]: time="2025-11-05T15:52:03.770696249Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Nov 5 15:52:04.252734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount768451668.mount: Deactivated successfully.
Nov 5 15:52:08.933428 containerd[1640]: time="2025-11-05T15:52:08.933369251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:08.934567 containerd[1640]: time="2025-11-05T15:52:08.934520117Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378491"
Nov 5 15:52:08.935654 containerd[1640]: time="2025-11-05T15:52:08.935368218Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:08.938098 containerd[1640]: time="2025-11-05T15:52:08.938045785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:08.938939 containerd[1640]: time="2025-11-05T15:52:08.938920428Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.168185132s"
Nov 5 15:52:08.938996 containerd[1640]: time="2025-11-05T15:52:08.938987266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Nov 5 15:52:12.851726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:52:12.852114 systemd[1]: kubelet.service: Consumed 150ms CPU time, 109.6M memory peak.
Nov 5 15:52:12.854295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:52:12.902594 systemd[1]: Reload requested from client PID 2407 ('systemctl') (unit session-7.scope)...
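The containerd "Pulled image" entries above each report a size and an elapsed time, so approximate pull throughput can be derived directly from the log (for example, etcd: 58938593 bytes in 5.168185132s, about 11.4 MB/s). A sketch of mine, assuming the exact escaped-quote message format shown in this log:

```python
import re

# Match the size/duration pair in containerd "Pulled image" entries, which
# appear in this log as: ... size \"58938593\" in 5.168185132s
PULLED_RE = re.compile(r'size \\"(\d+)\\" in ([\d.]+)(ms|s)')

def pull_throughput(entry):
    """Return bytes per second for one 'Pulled image' entry, or None if no match."""
    m = PULLED_RE.search(entry)
    if not m:
        return None
    size_bytes = int(m.group(1))
    seconds = float(m.group(2)) / (1000.0 if m.group(3) == "ms" else 1.0)
    return size_bytes / seconds

# Trimmed copy of the etcd entry above (hypothetical '...' stands for the rest).
etcd = r'msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" ... size \"58938593\" in 5.168185132s"'
print(f"{pull_throughput(etcd) / 1e6:.1f} MB/s")  # -> 11.4 MB/s
```

The `ms` branch matters for small images such as pause:3.10, whose pull completed in 500.916492ms.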
Nov 5 15:52:12.902636 systemd[1]: Reloading...
Nov 5 15:52:13.002363 zram_generator::config[2449]: No configuration found.
Nov 5 15:52:13.166005 systemd[1]: Reloading finished in 262 ms.
Nov 5 15:52:13.212763 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 15:52:13.212834 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 15:52:13.213030 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:52:13.213072 systemd[1]: kubelet.service: Consumed 76ms CPU time, 97.6M memory peak.
Nov 5 15:52:13.214380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:52:13.339523 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:52:13.349644 (kubelet)[2506]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 15:52:13.419918 kubelet[2506]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:52:13.419918 kubelet[2506]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 15:52:13.419918 kubelet[2506]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:52:13.423795 kubelet[2506]: I1105 15:52:13.423700 2506 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 15:52:14.136507 kubelet[2506]: I1105 15:52:14.136372 2506 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 15:52:14.136507 kubelet[2506]: I1105 15:52:14.136403 2506 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 15:52:14.136843 kubelet[2506]: I1105 15:52:14.136627 2506 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 15:52:14.178479 kubelet[2506]: E1105 15:52:14.178408 2506 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.62.132.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 15:52:14.182583 kubelet[2506]: I1105 15:52:14.182543 2506 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:52:14.200229 kubelet[2506]: I1105 15:52:14.199124 2506 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 15:52:14.207541 kubelet[2506]: I1105 15:52:14.207473 2506 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 15:52:14.210300 kubelet[2506]: I1105 15:52:14.210239 2506 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 15:52:14.212941 kubelet[2506]: I1105 15:52:14.210280 2506 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487-0-1-1-e8a5680daa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 15:52:14.212941 kubelet[2506]: I1105 15:52:14.212922 2506 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 15:52:14.212941 kubelet[2506]: I1105 15:52:14.212934 2506 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 15:52:14.213842 kubelet[2506]: I1105 15:52:14.213804 2506 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:52:14.217229 kubelet[2506]: I1105 15:52:14.216662 2506 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 15:52:14.217229 kubelet[2506]: I1105 15:52:14.216704 2506 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 15:52:14.219533 kubelet[2506]: I1105 15:52:14.219340 2506 kubelet.go:386] "Adding apiserver pod source"
Nov 5 15:52:14.219533 kubelet[2506]: I1105 15:52:14.219367 2506 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 15:52:14.230235 kubelet[2506]: E1105 15:52:14.230203 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.132.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-1-1-e8a5680daa&limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 15:52:14.240903 kubelet[2506]: I1105 15:52:14.240857 2506 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 15:52:14.241700 kubelet[2506]: I1105 15:52:14.241668 2506 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 15:52:14.243301 kubelet[2506]: W1105 15:52:14.242812 2506 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 15:52:14.244988 kubelet[2506]: E1105 15:52:14.244958 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.132.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 15:52:14.253541 kubelet[2506]: I1105 15:52:14.253500 2506 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 15:52:14.253645 kubelet[2506]: I1105 15:52:14.253594 2506 server.go:1289] "Started kubelet"
Nov 5 15:52:14.258479 kubelet[2506]: I1105 15:52:14.258362 2506 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:52:14.270349 kubelet[2506]: E1105 15:52:14.263640 2506 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.62.132.115:6443/api/v1/namespaces/default/events\": dial tcp 46.62.132.115:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4487-0-1-1-e8a5680daa.187527342e45e102 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4487-0-1-1-e8a5680daa,UID:ci-4487-0-1-1-e8a5680daa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4487-0-1-1-e8a5680daa,},FirstTimestamp:2025-11-05 15:52:14.253531394 +0000 UTC m=+0.898792632,LastTimestamp:2025-11-05 15:52:14.253531394 +0000 UTC m=+0.898792632,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487-0-1-1-e8a5680daa,}"
Nov 5 15:52:14.272303 kubelet[2506]: I1105 15:52:14.271147 2506 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 15:52:14.272303 kubelet[2506]: I1105 15:52:14.271385 2506 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 15:52:14.272303 kubelet[2506]: E1105 15:52:14.271798 2506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:14.272617 kubelet[2506]: I1105 15:52:14.272572 2506 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 15:52:14.273335 kubelet[2506]: I1105 15:52:14.273319 2506 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 15:52:14.274579 kubelet[2506]: I1105 15:52:14.274559 2506 factory.go:223] Registration of the systemd container factory successfully
Nov 5 15:52:14.274890 kubelet[2506]: I1105 15:52:14.274870 2506 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:52:14.275444 kubelet[2506]: E1105 15:52:14.275417 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.132.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-1-1-e8a5680daa?timeout=10s\": dial tcp 46.62.132.115:6443: connect: connection refused" interval="200ms"
Nov 5 15:52:14.281639 kubelet[2506]: E1105 15:52:14.281553 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.132.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 15:52:14.282372 kubelet[2506]: I1105 15:52:14.282114 2506 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 15:52:14.282634 kubelet[2506]: I1105 15:52:14.282603 2506 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:52:14.285518 kubelet[2506]: I1105 15:52:14.285487 2506 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:52:14.287230 kubelet[2506]: I1105 15:52:14.286429 2506 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 15:52:14.289689 kubelet[2506]: I1105 15:52:14.289636 2506 factory.go:223] Registration of the containerd container factory successfully
Nov 5 15:52:14.291786 kubelet[2506]: E1105 15:52:14.291766 2506 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 15:52:14.307781 kubelet[2506]: I1105 15:52:14.307725 2506 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:52:14.307912 kubelet[2506]: I1105 15:52:14.307769 2506 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:52:14.307912 kubelet[2506]: I1105 15:52:14.307893 2506 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:52:14.311012 kubelet[2506]: I1105 15:52:14.310980 2506 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:52:14.312603 kubelet[2506]: I1105 15:52:14.312587 2506 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:52:14.312750 kubelet[2506]: I1105 15:52:14.312729 2506 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 15:52:14.316435 kubelet[2506]: I1105 15:52:14.316419 2506 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:52:14.316523 kubelet[2506]: I1105 15:52:14.316516 2506 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 15:52:14.316628 kubelet[2506]: E1105 15:52:14.316605 2506 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:52:14.320244 kubelet[2506]: I1105 15:52:14.320231 2506 policy_none.go:49] "None policy: Start"
Nov 5 15:52:14.320370 kubelet[2506]: I1105 15:52:14.320362 2506 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:52:14.320440 kubelet[2506]: I1105 15:52:14.320434 2506 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:52:14.320997 kubelet[2506]: E1105 15:52:14.320970 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.132.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 15:52:14.330026 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 15:52:14.349465 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 15:52:14.353610 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 15:52:14.362444 kubelet[2506]: E1105 15:52:14.362352 2506 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 15:52:14.363564 kubelet[2506]: I1105 15:52:14.363547 2506 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:52:14.363730 kubelet[2506]: I1105 15:52:14.363638 2506 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:52:14.365495 kubelet[2506]: I1105 15:52:14.365469 2506 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:52:14.372307 kubelet[2506]: E1105 15:52:14.372250 2506 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:52:14.372307 kubelet[2506]: E1105 15:52:14.372290 2506 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:14.431330 systemd[1]: Created slice kubepods-burstable-pod5eba335c889c83faf3b939bafa6b1820.slice - libcontainer container kubepods-burstable-pod5eba335c889c83faf3b939bafa6b1820.slice.
Nov 5 15:52:14.463072 kubelet[2506]: E1105 15:52:14.462975 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.469889 systemd[1]: Created slice kubepods-burstable-pod7df9101b20f0db1cb231699640574ecd.slice - libcontainer container kubepods-burstable-pod7df9101b20f0db1cb231699640574ecd.slice.
Nov 5 15:52:14.470387 kubelet[2506]: I1105 15:52:14.470190 2506 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.474702 kubelet[2506]: E1105 15:52:14.474555 2506 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.132.115:6443/api/v1/nodes\": dial tcp 46.62.132.115:6443: connect: connection refused" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475499 kubelet[2506]: I1105 15:52:14.475159 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-ca-certs\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475499 kubelet[2506]: I1105 15:52:14.475220 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-flexvolume-dir\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475499 kubelet[2506]: I1105 15:52:14.475244 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-k8s-certs\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475499 kubelet[2506]: I1105 15:52:14.475268 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-kubeconfig\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475499 kubelet[2506]: I1105 15:52:14.475290 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475724 kubelet[2506]: I1105 15:52:14.475310 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5eba335c889c83faf3b939bafa6b1820-k8s-certs\") pod \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" (UID: \"5eba335c889c83faf3b939bafa6b1820\") " pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475724 kubelet[2506]: I1105 15:52:14.475330 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5eba335c889c83faf3b939bafa6b1820-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" (UID: \"5eba335c889c83faf3b939bafa6b1820\") " pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475724 kubelet[2506]: I1105 15:52:14.475350 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2fa4634ed0cfe685e7cac03008ceb9b-kubeconfig\") pod \"kube-scheduler-ci-4487-0-1-1-e8a5680daa\" (UID: \"c2fa4634ed0cfe685e7cac03008ceb9b\") " pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.475724 kubelet[2506]: I1105 15:52:14.475369 2506 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5eba335c889c83faf3b939bafa6b1820-ca-certs\") pod \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" (UID: \"5eba335c889c83faf3b939bafa6b1820\") " pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.478195 kubelet[2506]: E1105 15:52:14.477651 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.132.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-1-1-e8a5680daa?timeout=10s\": dial tcp 46.62.132.115:6443: connect: connection refused" interval="400ms"
Nov 5 15:52:14.478195 kubelet[2506]: E1105 15:52:14.477770 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.488822 systemd[1]: Created slice kubepods-burstable-podc2fa4634ed0cfe685e7cac03008ceb9b.slice - libcontainer container kubepods-burstable-podc2fa4634ed0cfe685e7cac03008ceb9b.slice.
Nov 5 15:52:14.491190 kubelet[2506]: E1105 15:52:14.491118 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.677686 kubelet[2506]: I1105 15:52:14.677625 2506 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.678414 kubelet[2506]: E1105 15:52:14.678309 2506 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.132.115:6443/api/v1/nodes\": dial tcp 46.62.132.115:6443: connect: connection refused" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:14.765633 containerd[1640]: time="2025-11-05T15:52:14.765478441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487-0-1-1-e8a5680daa,Uid:5eba335c889c83faf3b939bafa6b1820,Namespace:kube-system,Attempt:0,}"
Nov 5 15:52:14.778888 containerd[1640]: time="2025-11-05T15:52:14.778813543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487-0-1-1-e8a5680daa,Uid:7df9101b20f0db1cb231699640574ecd,Namespace:kube-system,Attempt:0,}"
Nov 5 15:52:14.792670 containerd[1640]: time="2025-11-05T15:52:14.792606536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487-0-1-1-e8a5680daa,Uid:c2fa4634ed0cfe685e7cac03008ceb9b,Namespace:kube-system,Attempt:0,}"
Nov 5 15:52:14.879746 kubelet[2506]: E1105 15:52:14.879681 2506 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.62.132.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-1-1-e8a5680daa?timeout=10s\": dial tcp 46.62.132.115:6443: connect: connection refused" interval="800ms"
Nov 5 15:52:14.939225 containerd[1640]: time="2025-11-05T15:52:14.939070640Z" level=info msg="connecting to shim 1f94e6d3b20ee574531ab5602e8bc24c806a468a4b689d5eb608943feb10d85c" address="unix:///run/containerd/s/f0298822dd6d2fe72078e886bf543b6a15a878055b21c5d5865d44fa0faa8969" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:52:14.945051 containerd[1640]: time="2025-11-05T15:52:14.944953588Z" level=info msg="connecting to shim 7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf" address="unix:///run/containerd/s/e63a1cfb7755e2150580dd85a3a07c16c3cccdd9ee379b4d9ef5acdc7f94fdd9" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:52:14.950478 containerd[1640]: time="2025-11-05T15:52:14.950351846Z" level=info msg="connecting to shim a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d" address="unix:///run/containerd/s/7436c349380b6cb1590543613e49ffa4ad12531386fcb717cf4903382541bdd8" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:52:15.061320 kubelet[2506]: E1105 15:52:15.060792 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.62.132.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4487-0-1-1-e8a5680daa&limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 15:52:15.075584 systemd[1]: Started cri-containerd-1f94e6d3b20ee574531ab5602e8bc24c806a468a4b689d5eb608943feb10d85c.scope - libcontainer container 1f94e6d3b20ee574531ab5602e8bc24c806a468a4b689d5eb608943feb10d85c.
Nov 5 15:52:15.081938 kubelet[2506]: I1105 15:52:15.081765 2506 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:15.082539 kubelet[2506]: E1105 15:52:15.082386 2506 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.62.132.115:6443/api/v1/nodes\": dial tcp 46.62.132.115:6443: connect: connection refused" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:15.089920 systemd[1]: Started cri-containerd-7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf.scope - libcontainer container 7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf.
Nov 5 15:52:15.095131 systemd[1]: Started cri-containerd-a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d.scope - libcontainer container a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d.
Nov 5 15:52:15.201892 containerd[1640]: time="2025-11-05T15:52:15.201843042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4487-0-1-1-e8a5680daa,Uid:7df9101b20f0db1cb231699640574ecd,Namespace:kube-system,Attempt:0,} returns sandbox id \"a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d\""
Nov 5 15:52:15.203634 containerd[1640]: time="2025-11-05T15:52:15.203348016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4487-0-1-1-e8a5680daa,Uid:c2fa4634ed0cfe685e7cac03008ceb9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf\""
Nov 5 15:52:15.208595 containerd[1640]: time="2025-11-05T15:52:15.208564014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4487-0-1-1-e8a5680daa,Uid:5eba335c889c83faf3b939bafa6b1820,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f94e6d3b20ee574531ab5602e8bc24c806a468a4b689d5eb608943feb10d85c\""
Nov 5 15:52:15.209307 containerd[1640]: time="2025-11-05T15:52:15.209103419Z" level=info msg="CreateContainer within sandbox \"a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 5 15:52:15.210275 containerd[1640]: time="2025-11-05T15:52:15.210224891Z" level=info msg="CreateContainer within sandbox \"7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 5 15:52:15.212532 containerd[1640]: time="2025-11-05T15:52:15.212488208Z" level=info msg="CreateContainer within sandbox \"1f94e6d3b20ee574531ab5602e8bc24c806a468a4b689d5eb608943feb10d85c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 5 15:52:15.220127 containerd[1640]: time="2025-11-05T15:52:15.220076601Z" level=info msg="Container 2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:52:15.225852 containerd[1640]: time="2025-11-05T15:52:15.225710654Z" level=info msg="Container 35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:52:15.228626 containerd[1640]: time="2025-11-05T15:52:15.228606022Z" level=info msg="Container 24fa89eb4c5950d21d7a3a1b63454adf53f240866d3f30d5524bf2a73bb6507c: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:52:15.233996 containerd[1640]: time="2025-11-05T15:52:15.233963454Z" level=info msg="CreateContainer within sandbox \"a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1\""
Nov 5 15:52:15.234922 containerd[1640]: time="2025-11-05T15:52:15.234879809Z" level=info msg="StartContainer for \"2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1\""
Nov 5 15:52:15.235709 containerd[1640]: time="2025-11-05T15:52:15.235640361Z" level=info msg="connecting to shim 2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1" address="unix:///run/containerd/s/7436c349380b6cb1590543613e49ffa4ad12531386fcb717cf4903382541bdd8" protocol=ttrpc version=3
Nov 5 15:52:15.237451 containerd[1640]: time="2025-11-05T15:52:15.237412957Z" level=info msg="CreateContainer within sandbox \"1f94e6d3b20ee574531ab5602e8bc24c806a468a4b689d5eb608943feb10d85c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"24fa89eb4c5950d21d7a3a1b63454adf53f240866d3f30d5524bf2a73bb6507c\""
Nov 5 15:52:15.238263 containerd[1640]: time="2025-11-05T15:52:15.238228004Z" level=info msg="StartContainer for \"24fa89eb4c5950d21d7a3a1b63454adf53f240866d3f30d5524bf2a73bb6507c\""
Nov 5 15:52:15.238968 containerd[1640]: time="2025-11-05T15:52:15.238767029Z" level=info msg="CreateContainer within sandbox \"7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954\""
Nov 5 15:52:15.239520 containerd[1640]: time="2025-11-05T15:52:15.239496488Z" level=info msg="connecting to shim 24fa89eb4c5950d21d7a3a1b63454adf53f240866d3f30d5524bf2a73bb6507c" address="unix:///run/containerd/s/f0298822dd6d2fe72078e886bf543b6a15a878055b21c5d5865d44fa0faa8969" protocol=ttrpc version=3
Nov 5 15:52:15.239962 containerd[1640]: time="2025-11-05T15:52:15.239943115Z" level=info msg="StartContainer for \"35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954\""
Nov 5 15:52:15.242184 containerd[1640]: time="2025-11-05T15:52:15.241295296Z" level=info msg="connecting to shim 35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954" address="unix:///run/containerd/s/e63a1cfb7755e2150580dd85a3a07c16c3cccdd9ee379b4d9ef5acdc7f94fdd9" protocol=ttrpc version=3
Nov 5 15:52:15.257330 systemd[1]: Started cri-containerd-2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1.scope - libcontainer container 2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1.
Nov 5 15:52:15.259993 systemd[1]: Started cri-containerd-24fa89eb4c5950d21d7a3a1b63454adf53f240866d3f30d5524bf2a73bb6507c.scope - libcontainer container 24fa89eb4c5950d21d7a3a1b63454adf53f240866d3f30d5524bf2a73bb6507c.
Nov 5 15:52:15.265273 systemd[1]: Started cri-containerd-35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954.scope - libcontainer container 35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954.
Nov 5 15:52:15.266893 kubelet[2506]: E1105 15:52:15.266865 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.62.132.115:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 15:52:15.332136 containerd[1640]: time="2025-11-05T15:52:15.332104394Z" level=info msg="StartContainer for \"35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954\" returns successfully"
Nov 5 15:52:15.333181 containerd[1640]: time="2025-11-05T15:52:15.333080464Z" level=info msg="StartContainer for \"24fa89eb4c5950d21d7a3a1b63454adf53f240866d3f30d5524bf2a73bb6507c\" returns successfully"
Nov 5 15:52:15.344537 kubelet[2506]: E1105 15:52:15.344377 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:15.349254 kubelet[2506]: E1105 15:52:15.349221 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.62.132.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 15:52:15.352791 containerd[1640]: time="2025-11-05T15:52:15.352754492Z" level=info msg="StartContainer for \"2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1\" returns successfully"
Nov 5 15:52:15.393393 kubelet[2506]: E1105 15:52:15.393344 2506 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.62.132.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.62.132.115:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 15:52:15.883956 kubelet[2506]: I1105 15:52:15.883917 2506 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:16.346996 kubelet[2506]: E1105 15:52:16.346970 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:16.347393 kubelet[2506]: E1105 15:52:16.347378 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:16.347668 kubelet[2506]: E1105 15:52:16.347653 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:16.892188 kubelet[2506]: E1105 15:52:16.891420 2506 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.032156 kubelet[2506]: I1105 15:52:17.032113 2506 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.032156 kubelet[2506]: E1105 15:52:17.032156 2506 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4487-0-1-1-e8a5680daa\": node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:17.055704 kubelet[2506]: E1105 15:52:17.055623 2506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:17.156774 kubelet[2506]: E1105 15:52:17.156634 2506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:17.257328 kubelet[2506]: E1105 15:52:17.257249 2506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:17.351660 kubelet[2506]: E1105 15:52:17.351599 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.352372 kubelet[2506]: E1105 15:52:17.352338 2506 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4487-0-1-1-e8a5680daa\" not found" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.358356 kubelet[2506]: E1105 15:52:17.358290 2506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:17.459614 kubelet[2506]: E1105 15:52:17.459443 2506 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4487-0-1-1-e8a5680daa\" not found"
Nov 5 15:52:17.473121 kubelet[2506]: I1105 15:52:17.473047 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.479945 kubelet[2506]: E1105 15:52:17.479874 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.479945 kubelet[2506]: I1105 15:52:17.479913 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.481709 kubelet[2506]: E1105 15:52:17.481671 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.481709 kubelet[2506]: I1105 15:52:17.481698 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.482940 kubelet[2506]: E1105 15:52:17.482919 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487-0-1-1-e8a5680daa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.594876 kubelet[2506]: I1105 15:52:17.594816 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:17.597111 kubelet[2506]: E1105 15:52:17.597070 2506 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:18.234464 kubelet[2506]: I1105 15:52:18.234410 2506 apiserver.go:52] "Watching apiserver"
Nov 5 15:52:18.273613 kubelet[2506]: I1105 15:52:18.273533 2506 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 15:52:18.350402 kubelet[2506]: I1105 15:52:18.350361 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:18.762504 kubelet[2506]: I1105 15:52:18.762452 2506 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:18.816117 systemd[1]: Reload requested from client PID 2785 ('systemctl') (unit session-7.scope)...
Nov 5 15:52:18.816131 systemd[1]: Reloading...
Nov 5 15:52:18.932223 zram_generator::config[2826]: No configuration found.
Nov 5 15:52:19.112544 systemd[1]: Reloading finished in 296 ms.
Nov 5 15:52:19.140935 kubelet[2506]: I1105 15:52:19.140872 2506 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:52:19.142587 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:52:19.161702 systemd[1]: kubelet.service: Deactivated successfully.
Nov 5 15:52:19.163222 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:52:19.163443 systemd[1]: kubelet.service: Consumed 1.239s CPU time, 130.6M memory peak.
Nov 5 15:52:19.166036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 15:52:19.326414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 15:52:19.341036 (kubelet)[2881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 15:52:19.404375 kubelet[2881]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:52:19.406215 kubelet[2881]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 15:52:19.406215 kubelet[2881]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 15:52:19.406215 kubelet[2881]: I1105 15:52:19.404990 2881 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 15:52:19.416157 kubelet[2881]: I1105 15:52:19.416110 2881 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Nov 5 15:52:19.416350 kubelet[2881]: I1105 15:52:19.416339 2881 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 15:52:19.416780 kubelet[2881]: I1105 15:52:19.416765 2881 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 15:52:19.420625 kubelet[2881]: I1105 15:52:19.420581 2881 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Nov 5 15:52:19.427967 kubelet[2881]: I1105 15:52:19.427941 2881 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 15:52:19.432271 kubelet[2881]: I1105 15:52:19.432241 2881 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 15:52:19.437352 kubelet[2881]: I1105 15:52:19.437325 2881 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 5 15:52:19.438963 kubelet[2881]: I1105 15:52:19.438923 2881 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 15:52:19.439281 kubelet[2881]: I1105 15:52:19.439078 2881 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4487-0-1-1-e8a5680daa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 15:52:19.440705 kubelet[2881]: I1105 15:52:19.440468 2881 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 15:52:19.440705 kubelet[2881]: I1105 15:52:19.440495 2881 container_manager_linux.go:303] "Creating device plugin manager"
Nov 5 15:52:19.440705 kubelet[2881]: I1105 15:52:19.440552 2881 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:52:19.440875 kubelet[2881]: I1105 15:52:19.440864 2881 kubelet.go:480] "Attempting to sync node with API server"
Nov 5 15:52:19.441525 kubelet[2881]: I1105 15:52:19.441411 2881 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 15:52:19.441641 kubelet[2881]: I1105 15:52:19.441613 2881 kubelet.go:386] "Adding apiserver pod source"
Nov 5 15:52:19.441726 kubelet[2881]: I1105 15:52:19.441718 2881 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 15:52:19.454277 kubelet[2881]: I1105 15:52:19.454231 2881 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Nov 5 15:52:19.456469 kubelet[2881]: I1105 15:52:19.456436 2881 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 15:52:19.463105 kubelet[2881]: I1105 15:52:19.463077 2881 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 5 15:52:19.463353 kubelet[2881]: I1105 15:52:19.463342 2881 server.go:1289] "Started kubelet"
Nov 5 15:52:19.464735 kubelet[2881]: I1105 15:52:19.464720 2881 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 15:52:19.473447 kubelet[2881]: I1105 15:52:19.473395 2881 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Nov 5 15:52:19.477056 kubelet[2881]: I1105 15:52:19.476995 2881 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 15:52:19.478819 kubelet[2881]: I1105 15:52:19.478802 2881 server.go:317] "Adding debug handlers to kubelet server"
Nov 5 15:52:19.485106 kubelet[2881]: I1105 15:52:19.485051 2881 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 15:52:19.486282 kubelet[2881]: I1105 15:52:19.485409 2881 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 15:52:19.486282 kubelet[2881]: I1105 15:52:19.485632 2881 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 15:52:19.486372 kubelet[2881]: I1105 15:52:19.486351 2881 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 5 15:52:19.486924 kubelet[2881]: I1105 15:52:19.486873 2881 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 5 15:52:19.487036 kubelet[2881]: I1105 15:52:19.487019 2881 reconciler.go:26] "Reconciler: start to sync state"
Nov 5 15:52:19.497300 kubelet[2881]: E1105 15:52:19.497266 2881 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 15:52:19.498387 kubelet[2881]: I1105 15:52:19.498369 2881 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Nov 5 15:52:19.499456 kubelet[2881]: I1105 15:52:19.499439 2881 status_manager.go:230] "Starting to sync pod status with apiserver"
Nov 5 15:52:19.499746 kubelet[2881]: I1105 15:52:19.499734 2881 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 15:52:19.499818 kubelet[2881]: I1105 15:52:19.499811 2881 kubelet.go:2436] "Starting kubelet main sync loop"
Nov 5 15:52:19.499906 kubelet[2881]: E1105 15:52:19.499890 2881 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 15:52:19.504270 kubelet[2881]: I1105 15:52:19.504124 2881 factory.go:223] Registration of the containerd container factory successfully
Nov 5 15:52:19.504270 kubelet[2881]: I1105 15:52:19.504275 2881 factory.go:223] Registration of the systemd container factory successfully
Nov 5 15:52:19.505445 kubelet[2881]: I1105 15:52:19.505408 2881 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 15:52:19.548365 kubelet[2881]: I1105 15:52:19.548347 2881 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 15:52:19.548680 kubelet[2881]: I1105 15:52:19.548533 2881 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 15:52:19.548680 kubelet[2881]: I1105 15:52:19.548602 2881 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 15:52:19.548806 kubelet[2881]: I1105 15:52:19.548798 2881 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 5 15:52:19.548862 kubelet[2881]: I1105 15:52:19.548848 2881 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 5 15:52:19.548888 kubelet[2881]: I1105 15:52:19.548885 2881 policy_none.go:49] "None policy: Start"
Nov 5 15:52:19.548922 kubelet[2881]: I1105 15:52:19.548910 2881 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 5 15:52:19.548953 kubelet[2881]: I1105 15:52:19.548950 2881 state_mem.go:35] "Initializing new in-memory state store"
Nov 5 15:52:19.549100 kubelet[2881]: I1105 15:52:19.549061 2881 state_mem.go:75] "Updated machine memory state"
Nov 5 15:52:19.552468 kubelet[2881]: E1105 15:52:19.552456 2881 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 15:52:19.553049 kubelet[2881]: I1105 15:52:19.553039 2881 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 15:52:19.553323 kubelet[2881]: I1105 15:52:19.553279 2881 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 15:52:19.553967 kubelet[2881]: I1105 15:52:19.553772 2881 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 15:52:19.561313 kubelet[2881]: E1105 15:52:19.561294 2881 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 15:52:19.601801 kubelet[2881]: I1105 15:52:19.601341 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.601801 kubelet[2881]: I1105 15:52:19.601485 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.601801 kubelet[2881]: I1105 15:52:19.601678 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.612205 kubelet[2881]: E1105 15:52:19.612114 2881 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" already exists" pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.612364 kubelet[2881]: E1105 15:52:19.612279 2881 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487-0-1-1-e8a5680daa\" already exists" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.668044 kubelet[2881]: I1105 15:52:19.665848 2881 kubelet_node_status.go:75] "Attempting to register node" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.679563 kubelet[2881]: I1105 15:52:19.679524 2881 kubelet_node_status.go:124] "Node was previously registered" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.679983 kubelet[2881]: I1105 15:52:19.679934 2881 kubelet_node_status.go:78] "Successfully registered node" node="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789375 kubelet[2881]: I1105 15:52:19.789328 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5eba335c889c83faf3b939bafa6b1820-ca-certs\") pod \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" (UID: \"5eba335c889c83faf3b939bafa6b1820\") " pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789375 kubelet[2881]: I1105 15:52:19.789379 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5eba335c889c83faf3b939bafa6b1820-k8s-certs\") pod \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" (UID: \"5eba335c889c83faf3b939bafa6b1820\") " pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789375 kubelet[2881]: I1105 15:52:19.789404 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5eba335c889c83faf3b939bafa6b1820-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4487-0-1-1-e8a5680daa\" (UID: \"5eba335c889c83faf3b939bafa6b1820\") " pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789760 kubelet[2881]: I1105 15:52:19.789424 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-ca-certs\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789760 kubelet[2881]: I1105 15:52:19.789442 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-kubeconfig\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789760 kubelet[2881]: I1105 15:52:19.789459 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789760 kubelet[2881]: I1105 15:52:19.789477 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c2fa4634ed0cfe685e7cac03008ceb9b-kubeconfig\") pod \"kube-scheduler-ci-4487-0-1-1-e8a5680daa\" (UID: \"c2fa4634ed0cfe685e7cac03008ceb9b\") " pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789760 kubelet[2881]: I1105 15:52:19.789494 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-flexvolume-dir\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:19.789929 kubelet[2881]: I1105 15:52:19.789511 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7df9101b20f0db1cb231699640574ecd-k8s-certs\") pod \"kube-controller-manager-ci-4487-0-1-1-e8a5680daa\" (UID: \"7df9101b20f0db1cb231699640574ecd\") " pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:20.447932 kubelet[2881]: I1105 15:52:20.447887 2881 apiserver.go:52] "Watching apiserver"
Nov 5 15:52:20.487938 kubelet[2881]: I1105 15:52:20.487882 2881 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 5 15:52:20.492647 kubelet[2881]: I1105 15:52:20.492582 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa" podStartSLOduration=2.492568432 podStartE2EDuration="2.492568432s" podCreationTimestamp="2025-11-05 15:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:20.492459935 +0000 UTC m=+1.142671626" watchObservedRunningTime="2025-11-05 15:52:20.492568432 +0000 UTC m=+1.142780113"
Nov 5 15:52:20.510484 kubelet[2881]: I1105 15:52:20.509222 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4487-0-1-1-e8a5680daa" podStartSLOduration=1.509184638 podStartE2EDuration="1.509184638s" podCreationTimestamp="2025-11-05 15:52:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:20.501544972 +0000 UTC m=+1.151756653" watchObservedRunningTime="2025-11-05 15:52:20.509184638 +0000 UTC m=+1.159396329"
Nov 5 15:52:20.522680 kubelet[2881]: I1105 15:52:20.522467 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4487-0-1-1-e8a5680daa" podStartSLOduration=2.5224451549999998 podStartE2EDuration="2.522445155s" podCreationTimestamp="2025-11-05 15:52:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:20.511294003 +0000 UTC m=+1.161505684" watchObservedRunningTime="2025-11-05 15:52:20.522445155 +0000 UTC m=+1.172656846"
Nov 5 15:52:20.536132 kubelet[2881]: I1105 15:52:20.536088 2881 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:20.545188 kubelet[2881]: E1105 15:52:20.545009 2881 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4487-0-1-1-e8a5680daa\" already exists" pod="kube-system/kube-scheduler-ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:23.935641 kubelet[2881]: I1105 15:52:23.935605 2881 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 5 15:52:23.936237 kubelet[2881]: I1105 15:52:23.936025 2881 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 5 15:52:23.936273 containerd[1640]: time="2025-11-05T15:52:23.935879860Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 5 15:52:24.257425 systemd[1]: Started sshd@7-46.62.132.115:22-222.255.250.41:57529.service - OpenSSH per-connection server daemon (222.255.250.41:57529).
Nov 5 15:52:24.482523 systemd[1]: Created slice kubepods-besteffort-pod30797e72_4254_48dc_9784_56c665ca85c5.slice - libcontainer container kubepods-besteffort-pod30797e72_4254_48dc_9784_56c665ca85c5.slice.
Nov 5 15:52:24.520712 kubelet[2881]: I1105 15:52:24.520462 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/30797e72-4254-48dc-9784-56c665ca85c5-kube-proxy\") pod \"kube-proxy-qqlwm\" (UID: \"30797e72-4254-48dc-9784-56c665ca85c5\") " pod="kube-system/kube-proxy-qqlwm"
Nov 5 15:52:24.520712 kubelet[2881]: I1105 15:52:24.520503 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30797e72-4254-48dc-9784-56c665ca85c5-xtables-lock\") pod \"kube-proxy-qqlwm\" (UID: \"30797e72-4254-48dc-9784-56c665ca85c5\") " pod="kube-system/kube-proxy-qqlwm"
Nov 5 15:52:24.520712 kubelet[2881]: I1105 15:52:24.520522 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30797e72-4254-48dc-9784-56c665ca85c5-lib-modules\") pod \"kube-proxy-qqlwm\" (UID: \"30797e72-4254-48dc-9784-56c665ca85c5\") " pod="kube-system/kube-proxy-qqlwm"
Nov 5 15:52:24.520712 kubelet[2881]: I1105 15:52:24.520539 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2874v\" (UniqueName: \"kubernetes.io/projected/30797e72-4254-48dc-9784-56c665ca85c5-kube-api-access-2874v\") pod \"kube-proxy-qqlwm\" (UID: \"30797e72-4254-48dc-9784-56c665ca85c5\") " pod="kube-system/kube-proxy-qqlwm"
Nov 5 15:52:24.631800 kubelet[2881]: E1105 15:52:24.631769 2881 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Nov 5 15:52:24.632191 kubelet[2881]: E1105 15:52:24.631956 2881 projected.go:194] Error preparing data for projected volume kube-api-access-2874v for pod kube-system/kube-proxy-qqlwm: configmap "kube-root-ca.crt" not found
Nov 5 15:52:24.632191 kubelet[2881]: E1105 15:52:24.632025 2881 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30797e72-4254-48dc-9784-56c665ca85c5-kube-api-access-2874v podName:30797e72-4254-48dc-9784-56c665ca85c5 nodeName:}" failed. No retries permitted until 2025-11-05 15:52:25.132001527 +0000 UTC m=+5.782213218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2874v" (UniqueName: "kubernetes.io/projected/30797e72-4254-48dc-9784-56c665ca85c5-kube-api-access-2874v") pod "kube-proxy-qqlwm" (UID: "30797e72-4254-48dc-9784-56c665ca85c5") : configmap "kube-root-ca.crt" not found
Nov 5 15:52:25.171985 systemd[1]: Created slice kubepods-besteffort-podda2bd9fa_a59a_4ace_9403_44ba69a14210.slice - libcontainer container kubepods-besteffort-podda2bd9fa_a59a_4ace_9403_44ba69a14210.slice.
Nov 5 15:52:25.227211 kubelet[2881]: I1105 15:52:25.226824 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvgk\" (UniqueName: \"kubernetes.io/projected/da2bd9fa-a59a-4ace-9403-44ba69a14210-kube-api-access-xkvgk\") pod \"tigera-operator-7dcd859c48-ck95l\" (UID: \"da2bd9fa-a59a-4ace-9403-44ba69a14210\") " pod="tigera-operator/tigera-operator-7dcd859c48-ck95l"
Nov 5 15:52:25.227211 kubelet[2881]: I1105 15:52:25.226869 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/da2bd9fa-a59a-4ace-9403-44ba69a14210-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ck95l\" (UID: \"da2bd9fa-a59a-4ace-9403-44ba69a14210\") " pod="tigera-operator/tigera-operator-7dcd859c48-ck95l"
Nov 5 15:52:25.394268 containerd[1640]: time="2025-11-05T15:52:25.394225225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qqlwm,Uid:30797e72-4254-48dc-9784-56c665ca85c5,Namespace:kube-system,Attempt:0,}"
Nov 5 15:52:25.411619 containerd[1640]: time="2025-11-05T15:52:25.411539646Z" level=info msg="connecting to shim 7e2140e04b1daf9c5303917f5c5efc0367ddd47a084449944c00e3c3175f1edd" address="unix:///run/containerd/s/f884b3e6ea342db2851b79ff54333684e04c68bff80afaec7b34fdf33a7e69ad" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:52:25.439415 systemd[1]: Started cri-containerd-7e2140e04b1daf9c5303917f5c5efc0367ddd47a084449944c00e3c3175f1edd.scope - libcontainer container 7e2140e04b1daf9c5303917f5c5efc0367ddd47a084449944c00e3c3175f1edd.
Nov 5 15:52:25.462974 containerd[1640]: time="2025-11-05T15:52:25.462928882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qqlwm,Uid:30797e72-4254-48dc-9784-56c665ca85c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e2140e04b1daf9c5303917f5c5efc0367ddd47a084449944c00e3c3175f1edd\""
Nov 5 15:52:25.465656 sshd[2928]: Connection closed by authenticating user root 222.255.250.41 port 57529 [preauth]
Nov 5 15:52:25.468795 containerd[1640]: time="2025-11-05T15:52:25.468760185Z" level=info msg="CreateContainer within sandbox \"7e2140e04b1daf9c5303917f5c5efc0367ddd47a084449944c00e3c3175f1edd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 5 15:52:25.469022 systemd[1]: sshd@7-46.62.132.115:22-222.255.250.41:57529.service: Deactivated successfully.
Nov 5 15:52:25.476464 containerd[1640]: time="2025-11-05T15:52:25.476422344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ck95l,Uid:da2bd9fa-a59a-4ace-9403-44ba69a14210,Namespace:tigera-operator,Attempt:0,}"
Nov 5 15:52:25.481289 containerd[1640]: time="2025-11-05T15:52:25.481250765Z" level=info msg="Container 9166e9f29004ceadc04795992fae98b517b97e101b805c21c3e79b700e09e159: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:52:25.491187 containerd[1640]: time="2025-11-05T15:52:25.490556270Z" level=info msg="CreateContainer within sandbox \"7e2140e04b1daf9c5303917f5c5efc0367ddd47a084449944c00e3c3175f1edd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9166e9f29004ceadc04795992fae98b517b97e101b805c21c3e79b700e09e159\""
Nov 5 15:52:25.491705 containerd[1640]: time="2025-11-05T15:52:25.491547472Z" level=info msg="StartContainer for \"9166e9f29004ceadc04795992fae98b517b97e101b805c21c3e79b700e09e159\""
Nov 5 15:52:25.492785 containerd[1640]: time="2025-11-05T15:52:25.492764025Z" level=info msg="connecting to shim 9166e9f29004ceadc04795992fae98b517b97e101b805c21c3e79b700e09e159" address="unix:///run/containerd/s/f884b3e6ea342db2851b79ff54333684e04c68bff80afaec7b34fdf33a7e69ad" protocol=ttrpc version=3
Nov 5 15:52:25.505745 containerd[1640]: time="2025-11-05T15:52:25.505696108Z" level=info msg="connecting to shim 98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815" address="unix:///run/containerd/s/57a835a54775c32ed12bf97f13bc62978918a4943649c9f237c276734d476b53" namespace=k8s.io protocol=ttrpc version=3
Nov 5 15:52:25.514325 systemd[1]: Started cri-containerd-9166e9f29004ceadc04795992fae98b517b97e101b805c21c3e79b700e09e159.scope - libcontainer container 9166e9f29004ceadc04795992fae98b517b97e101b805c21c3e79b700e09e159.
Nov 5 15:52:25.529387 systemd[1]: Started cri-containerd-98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815.scope - libcontainer container 98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815.
Nov 5 15:52:25.560444 containerd[1640]: time="2025-11-05T15:52:25.560403156Z" level=info msg="StartContainer for \"9166e9f29004ceadc04795992fae98b517b97e101b805c21c3e79b700e09e159\" returns successfully"
Nov 5 15:52:25.595445 containerd[1640]: time="2025-11-05T15:52:25.595385437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ck95l,Uid:da2bd9fa-a59a-4ace-9403-44ba69a14210,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815\""
Nov 5 15:52:25.599942 containerd[1640]: time="2025-11-05T15:52:25.599718563Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 5 15:52:26.234993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258299480.mount: Deactivated successfully.
Nov 5 15:52:26.559700 kubelet[2881]: I1105 15:52:26.559251 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qqlwm" podStartSLOduration=2.559234452 podStartE2EDuration="2.559234452s" podCreationTimestamp="2025-11-05 15:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:52:26.558144918 +0000 UTC m=+7.208356599" watchObservedRunningTime="2025-11-05 15:52:26.559234452 +0000 UTC m=+7.209446143"
Nov 5 15:52:27.752330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67922515.mount: Deactivated successfully.
Nov 5 15:52:28.501399 containerd[1640]: time="2025-11-05T15:52:28.500888855Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:28.502328 containerd[1640]: time="2025-11-05T15:52:28.502267458Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 5 15:52:28.503190 containerd[1640]: time="2025-11-05T15:52:28.503052295Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:28.504774 containerd[1640]: time="2025-11-05T15:52:28.504740562Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:28.505941 containerd[1640]: time="2025-11-05T15:52:28.505518399Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.905567063s"
Nov 5 15:52:28.505941 containerd[1640]: time="2025-11-05T15:52:28.505546230Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 5 15:52:28.512027 containerd[1640]: time="2025-11-05T15:52:28.511981546Z" level=info msg="CreateContainer within sandbox \"98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 5 15:52:28.523334 containerd[1640]: time="2025-11-05T15:52:28.522831878Z" level=info msg="Container 115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:52:28.532413 containerd[1640]: time="2025-11-05T15:52:28.532358058Z" level=info msg="CreateContainer within sandbox \"98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25\""
Nov 5 15:52:28.533338 containerd[1640]: time="2025-11-05T15:52:28.532944695Z" level=info msg="StartContainer for \"115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25\""
Nov 5 15:52:28.534538 containerd[1640]: time="2025-11-05T15:52:28.534303248Z" level=info msg="connecting to shim 115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25" address="unix:///run/containerd/s/57a835a54775c32ed12bf97f13bc62978918a4943649c9f237c276734d476b53" protocol=ttrpc version=3
Nov 5 15:52:28.558522 systemd[1]: Started cri-containerd-115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25.scope - libcontainer container 115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25.
Nov 5 15:52:28.584234 containerd[1640]: time="2025-11-05T15:52:28.584188682Z" level=info msg="StartContainer for \"115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25\" returns successfully" Nov 5 15:52:29.589481 kubelet[2881]: I1105 15:52:29.588553 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ck95l" podStartSLOduration=1.680485681 podStartE2EDuration="4.588532516s" podCreationTimestamp="2025-11-05 15:52:25 +0000 UTC" firstStartedPulling="2025-11-05 15:52:25.598680739 +0000 UTC m=+6.248892430" lastFinishedPulling="2025-11-05 15:52:28.506727574 +0000 UTC m=+9.156939265" observedRunningTime="2025-11-05 15:52:29.585986463 +0000 UTC m=+10.236198184" watchObservedRunningTime="2025-11-05 15:52:29.588532516 +0000 UTC m=+10.238744237" Nov 5 15:52:34.668424 sudo[1951]: pam_unix(sudo:session): session closed for user root Nov 5 15:52:34.849720 sshd[1950]: Connection closed by 139.178.68.195 port 37204 Nov 5 15:52:34.851529 sshd-session[1932]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:34.854656 systemd-logind[1606]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:52:34.856341 systemd[1]: sshd@6-46.62.132.115:22-139.178.68.195:37204.service: Deactivated successfully. Nov 5 15:52:34.858338 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:52:34.858659 systemd[1]: session-7.scope: Consumed 5.315s CPU time, 154.7M memory peak. Nov 5 15:52:34.860728 systemd-logind[1606]: Removed session 7. Nov 5 15:52:38.869894 systemd[1]: Created slice kubepods-besteffort-podc1358978_776f_4c3e_9612_a035d6d7f5c1.slice - libcontainer container kubepods-besteffort-podc1358978_776f_4c3e_9612_a035d6d7f5c1.slice. 
Nov 5 15:52:38.917895 kubelet[2881]: I1105 15:52:38.917777 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1358978-776f-4c3e-9612-a035d6d7f5c1-tigera-ca-bundle\") pod \"calico-typha-fcff8b9-f2lx5\" (UID: \"c1358978-776f-4c3e-9612-a035d6d7f5c1\") " pod="calico-system/calico-typha-fcff8b9-f2lx5" Nov 5 15:52:38.917895 kubelet[2881]: I1105 15:52:38.917821 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6b9\" (UniqueName: \"kubernetes.io/projected/c1358978-776f-4c3e-9612-a035d6d7f5c1-kube-api-access-fm6b9\") pod \"calico-typha-fcff8b9-f2lx5\" (UID: \"c1358978-776f-4c3e-9612-a035d6d7f5c1\") " pod="calico-system/calico-typha-fcff8b9-f2lx5" Nov 5 15:52:38.917895 kubelet[2881]: I1105 15:52:38.917835 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c1358978-776f-4c3e-9612-a035d6d7f5c1-typha-certs\") pod \"calico-typha-fcff8b9-f2lx5\" (UID: \"c1358978-776f-4c3e-9612-a035d6d7f5c1\") " pod="calico-system/calico-typha-fcff8b9-f2lx5" Nov 5 15:52:39.014028 systemd[1]: Created slice kubepods-besteffort-pode4a447dd_4e16_4350_a305_c7978a14c966.slice - libcontainer container kubepods-besteffort-pode4a447dd_4e16_4350_a305_c7978a14c966.slice. 
Nov 5 15:52:39.120480 kubelet[2881]: I1105 15:52:39.120395 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-policysync\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120480 kubelet[2881]: I1105 15:52:39.120432 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-lib-modules\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120682 kubelet[2881]: I1105 15:52:39.120517 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4a447dd-4e16-4350-a305-c7978a14c966-tigera-ca-bundle\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120682 kubelet[2881]: I1105 15:52:39.120579 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-flexvol-driver-host\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120682 kubelet[2881]: I1105 15:52:39.120602 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-var-run-calico\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120682 kubelet[2881]: I1105 15:52:39.120613 2881 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e4a447dd-4e16-4350-a305-c7978a14c966-node-certs\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120682 kubelet[2881]: I1105 15:52:39.120624 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fh57\" (UniqueName: \"kubernetes.io/projected/e4a447dd-4e16-4350-a305-c7978a14c966-kube-api-access-8fh57\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120762 kubelet[2881]: I1105 15:52:39.120637 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-cni-net-dir\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120762 kubelet[2881]: I1105 15:52:39.120656 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-var-lib-calico\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120762 kubelet[2881]: I1105 15:52:39.120666 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-xtables-lock\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120762 kubelet[2881]: I1105 15:52:39.120675 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-cni-bin-dir\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.120762 kubelet[2881]: I1105 15:52:39.120684 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e4a447dd-4e16-4350-a305-c7978a14c966-cni-log-dir\") pod \"calico-node-zbtz2\" (UID: \"e4a447dd-4e16-4350-a305-c7978a14c966\") " pod="calico-system/calico-node-zbtz2" Nov 5 15:52:39.181342 containerd[1640]: time="2025-11-05T15:52:39.181272240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fcff8b9-f2lx5,Uid:c1358978-776f-4c3e-9612-a035d6d7f5c1,Namespace:calico-system,Attempt:0,}" Nov 5 15:52:39.208014 containerd[1640]: time="2025-11-05T15:52:39.207859397Z" level=info msg="connecting to shim 338a392e687e4a6a60b9d762d4ebb19693881175b11ea5ba0d1f2a9e4871d361" address="unix:///run/containerd/s/c42409fd13404ce21f41a7200dce406f4d2efe8750b497f3b834126c54416056" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:39.220202 kubelet[2881]: E1105 15:52:39.218324 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:52:39.238808 kubelet[2881]: E1105 15:52:39.238776 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.238808 kubelet[2881]: W1105 15:52:39.238802 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Nov 5 15:52:39.240778 kubelet[2881]: E1105 15:52:39.240607 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.246348 kubelet[2881]: E1105 15:52:39.246244 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.246503 kubelet[2881]: W1105 15:52:39.246480 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.251575 kubelet[2881]: E1105 15:52:39.251544 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.261427 systemd[1]: Started cri-containerd-338a392e687e4a6a60b9d762d4ebb19693881175b11ea5ba0d1f2a9e4871d361.scope - libcontainer container 338a392e687e4a6a60b9d762d4ebb19693881175b11ea5ba0d1f2a9e4871d361. 
Nov 5 15:52:39.297255 containerd[1640]: time="2025-11-05T15:52:39.297214710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fcff8b9-f2lx5,Uid:c1358978-776f-4c3e-9612-a035d6d7f5c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"338a392e687e4a6a60b9d762d4ebb19693881175b11ea5ba0d1f2a9e4871d361\"" Nov 5 15:52:39.299096 containerd[1640]: time="2025-11-05T15:52:39.299055779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:52:39.300507 kubelet[2881]: E1105 15:52:39.300480 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.300507 kubelet[2881]: W1105 15:52:39.300497 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.300605 kubelet[2881]: E1105 15:52:39.300512 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.300726 kubelet[2881]: E1105 15:52:39.300712 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.300726 kubelet[2881]: W1105 15:52:39.300723 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.300765 kubelet[2881]: E1105 15:52:39.300731 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.301053 kubelet[2881]: E1105 15:52:39.301018 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.301053 kubelet[2881]: W1105 15:52:39.301029 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.301053 kubelet[2881]: E1105 15:52:39.301036 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.301932 kubelet[2881]: E1105 15:52:39.301917 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.301932 kubelet[2881]: W1105 15:52:39.301927 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.301932 kubelet[2881]: E1105 15:52:39.301935 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.302371 kubelet[2881]: E1105 15:52:39.302346 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.302371 kubelet[2881]: W1105 15:52:39.302358 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.302642 kubelet[2881]: E1105 15:52:39.302463 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.302642 kubelet[2881]: E1105 15:52:39.302597 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.302642 kubelet[2881]: W1105 15:52:39.302603 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.302642 kubelet[2881]: E1105 15:52:39.302608 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.302928 kubelet[2881]: E1105 15:52:39.302906 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.302928 kubelet[2881]: W1105 15:52:39.302917 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.302928 kubelet[2881]: E1105 15:52:39.302924 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.303146 kubelet[2881]: E1105 15:52:39.303125 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.303146 kubelet[2881]: W1105 15:52:39.303136 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.303146 kubelet[2881]: E1105 15:52:39.303144 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.303532 kubelet[2881]: E1105 15:52:39.303491 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.303532 kubelet[2881]: W1105 15:52:39.303505 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.303532 kubelet[2881]: E1105 15:52:39.303512 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.303619 kubelet[2881]: E1105 15:52:39.303595 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.303619 kubelet[2881]: W1105 15:52:39.303602 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.303619 kubelet[2881]: E1105 15:52:39.303607 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.304518 kubelet[2881]: E1105 15:52:39.304471 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.304609 kubelet[2881]: W1105 15:52:39.304495 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.304649 kubelet[2881]: E1105 15:52:39.304581 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.304968 kubelet[2881]: E1105 15:52:39.304872 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.304968 kubelet[2881]: W1105 15:52:39.304884 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.304968 kubelet[2881]: E1105 15:52:39.304897 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.305205 kubelet[2881]: E1105 15:52:39.305197 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.305250 kubelet[2881]: W1105 15:52:39.305245 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.305280 kubelet[2881]: E1105 15:52:39.305275 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.305512 kubelet[2881]: E1105 15:52:39.305444 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.305512 kubelet[2881]: W1105 15:52:39.305453 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.305512 kubelet[2881]: E1105 15:52:39.305462 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.306107 kubelet[2881]: E1105 15:52:39.306072 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.306107 kubelet[2881]: W1105 15:52:39.306102 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.306107 kubelet[2881]: E1105 15:52:39.306112 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.306337 kubelet[2881]: E1105 15:52:39.306238 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.306337 kubelet[2881]: W1105 15:52:39.306263 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.306337 kubelet[2881]: E1105 15:52:39.306269 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.306514 kubelet[2881]: E1105 15:52:39.306358 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.306514 kubelet[2881]: W1105 15:52:39.306363 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.306514 kubelet[2881]: E1105 15:52:39.306369 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.306514 kubelet[2881]: E1105 15:52:39.306433 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.306514 kubelet[2881]: W1105 15:52:39.306438 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.306514 kubelet[2881]: E1105 15:52:39.306443 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.306514 kubelet[2881]: E1105 15:52:39.306504 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.306514 kubelet[2881]: W1105 15:52:39.306508 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.306514 kubelet[2881]: E1105 15:52:39.306512 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.306691 kubelet[2881]: E1105 15:52:39.306609 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.306691 kubelet[2881]: W1105 15:52:39.306614 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.306691 kubelet[2881]: E1105 15:52:39.306619 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.323617 kubelet[2881]: E1105 15:52:39.323587 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.323617 kubelet[2881]: W1105 15:52:39.323606 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.323617 kubelet[2881]: E1105 15:52:39.323621 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:39.323839 kubelet[2881]: I1105 15:52:39.323657 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b030d0a4-2581-4144-a827-7d0dc3133cf3-socket-dir\") pod \"csi-node-driver-zmgb9\" (UID: \"b030d0a4-2581-4144-a827-7d0dc3133cf3\") " pod="calico-system/csi-node-driver-zmgb9" Nov 5 15:52:39.324066 kubelet[2881]: E1105 15:52:39.324018 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.324066 kubelet[2881]: W1105 15:52:39.324029 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.324066 kubelet[2881]: E1105 15:52:39.324036 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.324066 kubelet[2881]: I1105 15:52:39.324049 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59hs\" (UniqueName: \"kubernetes.io/projected/b030d0a4-2581-4144-a827-7d0dc3133cf3-kube-api-access-p59hs\") pod \"csi-node-driver-zmgb9\" (UID: \"b030d0a4-2581-4144-a827-7d0dc3133cf3\") " pod="calico-system/csi-node-driver-zmgb9" Nov 5 15:52:39.324433 containerd[1640]: time="2025-11-05T15:52:39.324391896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zbtz2,Uid:e4a447dd-4e16-4350-a305-c7978a14c966,Namespace:calico-system,Attempt:0,}" Nov 5 15:52:39.324588 kubelet[2881]: E1105 15:52:39.324495 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.324588 kubelet[2881]: W1105 15:52:39.324506 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.324588 kubelet[2881]: E1105 15:52:39.324513 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.324588 kubelet[2881]: I1105 15:52:39.324544 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b030d0a4-2581-4144-a827-7d0dc3133cf3-registration-dir\") pod \"csi-node-driver-zmgb9\" (UID: \"b030d0a4-2581-4144-a827-7d0dc3133cf3\") " pod="calico-system/csi-node-driver-zmgb9" Nov 5 15:52:39.325425 kubelet[2881]: E1105 15:52:39.325379 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.325425 kubelet[2881]: W1105 15:52:39.325395 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.325425 kubelet[2881]: E1105 15:52:39.325403 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.325517 kubelet[2881]: I1105 15:52:39.325430 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b030d0a4-2581-4144-a827-7d0dc3133cf3-kubelet-dir\") pod \"csi-node-driver-zmgb9\" (UID: \"b030d0a4-2581-4144-a827-7d0dc3133cf3\") " pod="calico-system/csi-node-driver-zmgb9" Nov 5 15:52:39.326233 kubelet[2881]: E1105 15:52:39.326209 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:39.326233 kubelet[2881]: W1105 15:52:39.326226 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:39.326233 kubelet[2881]: E1105 15:52:39.326233 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:39.326427 kubelet[2881]: I1105 15:52:39.326317 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b030d0a4-2581-4144-a827-7d0dc3133cf3-varrun\") pod \"csi-node-driver-zmgb9\" (UID: \"b030d0a4-2581-4144-a827-7d0dc3133cf3\") " pod="calico-system/csi-node-driver-zmgb9" Nov 5 15:52:39.350044 containerd[1640]: time="2025-11-05T15:52:39.349996372Z" level=info msg="connecting to shim 73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9" address="unix:///run/containerd/s/7a83a122f2b4b9ac0c43822550ef89348d7aea3b6d2194c927fb2dca7746fb36" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:39.371347 systemd[1]: Started cri-containerd-73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9.scope - libcontainer container 73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9.
Nov 5 15:52:39.397021 containerd[1640]: time="2025-11-05T15:52:39.396973198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zbtz2,Uid:e4a447dd-4e16-4350-a305-c7978a14c966,Namespace:calico-system,Attempt:0,} returns sandbox id \"73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9\"" Nov 5 15:52:40.501511 kubelet[2881]: E1105 15:52:40.501435 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:52:41.300183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451219500.mount: Deactivated successfully.
Nov 5 15:52:41.755729 containerd[1640]: time="2025-11-05T15:52:41.755249320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 5 15:52:41.760829 containerd[1640]: time="2025-11-05T15:52:41.760669476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.461588385s" Nov 5 15:52:41.760829 containerd[1640]: time="2025-11-05T15:52:41.760710537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 15:52:41.763186 containerd[1640]: time="2025-11-05T15:52:41.762261085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:52:41.768919 containerd[1640]: time="2025-11-05T15:52:41.768891568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:41.772431 containerd[1640]: time="2025-11-05T15:52:41.772405755Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:41.772773 containerd[1640]: time="2025-11-05T15:52:41.772755636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:41.774922 containerd[1640]: time="2025-11-05T15:52:41.774899821Z" level=info msg="CreateContainer within sandbox \"338a392e687e4a6a60b9d762d4ebb19693881175b11ea5ba0d1f2a9e4871d361\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:52:41.781427 containerd[1640]: time="2025-11-05T15:52:41.781255407Z" level=info msg="Container 1d42525bb8cfa8447b41e994e6152a4fd516fd0682382ecf3fcb593b4ad63d5c: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:41.799453 containerd[1640]: time="2025-11-05T15:52:41.799419162Z" level=info msg="CreateContainer within sandbox \"338a392e687e4a6a60b9d762d4ebb19693881175b11ea5ba0d1f2a9e4871d361\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d42525bb8cfa8447b41e994e6152a4fd516fd0682382ecf3fcb593b4ad63d5c\"" Nov 5 15:52:41.799794 containerd[1640]: time="2025-11-05T15:52:41.799775183Z" level=info msg="StartContainer for \"1d42525bb8cfa8447b41e994e6152a4fd516fd0682382ecf3fcb593b4ad63d5c\"" Nov 5 15:52:41.800883 containerd[1640]: time="2025-11-05T15:52:41.800862187Z" level=info msg="connecting to shim 1d42525bb8cfa8447b41e994e6152a4fd516fd0682382ecf3fcb593b4ad63d5c" address="unix:///run/containerd/s/c42409fd13404ce21f41a7200dce406f4d2efe8750b497f3b834126c54416056" protocol=ttrpc version=3 Nov 5 15:52:41.825317 systemd[1]: Started cri-containerd-1d42525bb8cfa8447b41e994e6152a4fd516fd0682382ecf3fcb593b4ad63d5c.scope - libcontainer container 1d42525bb8cfa8447b41e994e6152a4fd516fd0682382ecf3fcb593b4ad63d5c. Nov 5 15:52:41.879633 containerd[1640]: time="2025-11-05T15:52:41.879600999Z" level=info msg="StartContainer for \"1d42525bb8cfa8447b41e994e6152a4fd516fd0682382ecf3fcb593b4ad63d5c\" returns successfully" Nov 5 15:52:42.223108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901049986.mount: Deactivated successfully. 
Nov 5 15:52:42.501688 kubelet[2881]: E1105 15:52:42.501160 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:52:42.626155 kubelet[2881]: I1105 15:52:42.625516 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fcff8b9-f2lx5" podStartSLOduration=2.16185272 podStartE2EDuration="4.625493839s" podCreationTimestamp="2025-11-05 15:52:38 +0000 UTC" firstStartedPulling="2025-11-05 15:52:39.298308456 +0000 UTC m=+19.948520137" lastFinishedPulling="2025-11-05 15:52:41.761949565 +0000 UTC m=+22.412161256" observedRunningTime="2025-11-05 15:52:42.624135489 +0000 UTC m=+23.274347220" watchObservedRunningTime="2025-11-05 15:52:42.625493839 +0000 UTC m=+23.275705560" Nov 5 15:52:42.634144 kubelet[2881]: E1105 15:52:42.632649 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.634144 kubelet[2881]: W1105 15:52:42.632659 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.634144 kubelet[2881]: E1105 15:52:42.632668 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.634144 kubelet[2881]: E1105 15:52:42.632841 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.634144 kubelet[2881]: W1105 15:52:42.632848 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.634144 kubelet[2881]: E1105 15:52:42.632856 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.634144 kubelet[2881]: E1105 15:52:42.633067 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.634144 kubelet[2881]: W1105 15:52:42.633076 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.634144 kubelet[2881]: E1105 15:52:42.633084 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.634144 kubelet[2881]: E1105 15:52:42.633465 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.634571 kubelet[2881]: W1105 15:52:42.633473 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.634571 kubelet[2881]: E1105 15:52:42.633482 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.634571 kubelet[2881]: E1105 15:52:42.633697 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.634571 kubelet[2881]: W1105 15:52:42.633717 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.634571 kubelet[2881]: E1105 15:52:42.633725 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.634571 kubelet[2881]: E1105 15:52:42.633910 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.634571 kubelet[2881]: W1105 15:52:42.633918 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.634571 kubelet[2881]: E1105 15:52:42.633926 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.634571 kubelet[2881]: E1105 15:52:42.634260 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.634571 kubelet[2881]: W1105 15:52:42.634268 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.634783 kubelet[2881]: E1105 15:52:42.634277 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.657391 kubelet[2881]: E1105 15:52:42.657284 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.657391 kubelet[2881]: W1105 15:52:42.657335 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.657391 kubelet[2881]: E1105 15:52:42.657358 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.659539 kubelet[2881]: E1105 15:52:42.659462 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.659539 kubelet[2881]: W1105 15:52:42.659495 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.659539 kubelet[2881]: E1105 15:52:42.659515 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.660295 kubelet[2881]: E1105 15:52:42.660252 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.660507 kubelet[2881]: W1105 15:52:42.660448 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.660507 kubelet[2881]: E1105 15:52:42.660483 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.660824 kubelet[2881]: E1105 15:52:42.660796 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.660824 kubelet[2881]: W1105 15:52:42.660816 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.660932 kubelet[2881]: E1105 15:52:42.660830 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.661103 kubelet[2881]: E1105 15:52:42.661079 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.661103 kubelet[2881]: W1105 15:52:42.661097 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.661206 kubelet[2881]: E1105 15:52:42.661109 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.661409 kubelet[2881]: E1105 15:52:42.661382 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.661921 kubelet[2881]: W1105 15:52:42.661407 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.661921 kubelet[2881]: E1105 15:52:42.661427 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.661921 kubelet[2881]: E1105 15:52:42.661691 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.661921 kubelet[2881]: W1105 15:52:42.661703 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.661921 kubelet[2881]: E1105 15:52:42.661714 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.662264 kubelet[2881]: E1105 15:52:42.662234 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.662264 kubelet[2881]: W1105 15:52:42.662256 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.662354 kubelet[2881]: E1105 15:52:42.662271 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.662689 kubelet[2881]: E1105 15:52:42.662660 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.662689 kubelet[2881]: W1105 15:52:42.662679 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.662841 kubelet[2881]: E1105 15:52:42.662692 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.663076 kubelet[2881]: E1105 15:52:42.663059 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.663076 kubelet[2881]: W1105 15:52:42.663074 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.663392 kubelet[2881]: E1105 15:52:42.663086 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.663526 kubelet[2881]: E1105 15:52:42.663471 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.663526 kubelet[2881]: W1105 15:52:42.663496 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.663526 kubelet[2881]: E1105 15:52:42.663514 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.663881 kubelet[2881]: E1105 15:52:42.663846 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.663881 kubelet[2881]: W1105 15:52:42.663870 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.664094 kubelet[2881]: E1105 15:52:42.663886 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.664280 kubelet[2881]: E1105 15:52:42.664261 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.664396 kubelet[2881]: W1105 15:52:42.664355 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.664396 kubelet[2881]: E1105 15:52:42.664386 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.664785 kubelet[2881]: E1105 15:52:42.664751 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.664785 kubelet[2881]: W1105 15:52:42.664773 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.664890 kubelet[2881]: E1105 15:52:42.664789 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.665249 kubelet[2881]: E1105 15:52:42.665113 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.665249 kubelet[2881]: W1105 15:52:42.665131 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.665249 kubelet[2881]: E1105 15:52:42.665145 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.665724 kubelet[2881]: E1105 15:52:42.665687 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.665724 kubelet[2881]: W1105 15:52:42.665708 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.665724 kubelet[2881]: E1105 15:52:42.665724 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:42.666282 kubelet[2881]: E1105 15:52:42.666238 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.666282 kubelet[2881]: W1105 15:52:42.666254 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.666282 kubelet[2881]: E1105 15:52:42.666266 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:42.666522 kubelet[2881]: E1105 15:52:42.666467 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:42.666522 kubelet[2881]: W1105 15:52:42.666477 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:42.666522 kubelet[2881]: E1105 15:52:42.666488 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.611259 kubelet[2881]: I1105 15:52:43.611200 2881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:52:43.618846 containerd[1640]: time="2025-11-05T15:52:43.618790056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:43.620420 containerd[1640]: time="2025-11-05T15:52:43.620386913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 5 15:52:43.621260 containerd[1640]: time="2025-11-05T15:52:43.620825216Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:43.632242 containerd[1640]: time="2025-11-05T15:52:43.631387614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:43.632814 containerd[1640]: time="2025-11-05T15:52:43.632775095Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.870483319s" Nov 5 15:52:43.632933 containerd[1640]: time="2025-11-05T15:52:43.632914269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:52:43.638843 containerd[1640]: time="2025-11-05T15:52:43.638790330Z" level=info 
msg="CreateContainer within sandbox \"73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:52:43.642358 kubelet[2881]: E1105 15:52:43.641305 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.642358 kubelet[2881]: W1105 15:52:43.641330 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.642358 kubelet[2881]: E1105 15:52:43.641353 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.642358 kubelet[2881]: E1105 15:52:43.641580 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.642358 kubelet[2881]: W1105 15:52:43.641600 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.642358 kubelet[2881]: E1105 15:52:43.641621 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.642358 kubelet[2881]: E1105 15:52:43.641915 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.642358 kubelet[2881]: W1105 15:52:43.641934 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.642358 kubelet[2881]: E1105 15:52:43.641955 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.642688 kubelet[2881]: E1105 15:52:43.642381 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.642688 kubelet[2881]: W1105 15:52:43.642401 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.642688 kubelet[2881]: E1105 15:52:43.642421 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.642764 kubelet[2881]: E1105 15:52:43.642690 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.642764 kubelet[2881]: W1105 15:52:43.642701 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.642764 kubelet[2881]: E1105 15:52:43.642711 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.642909 kubelet[2881]: E1105 15:52:43.642891 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.642909 kubelet[2881]: W1105 15:52:43.642904 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.644575 kubelet[2881]: E1105 15:52:43.642916 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.644575 kubelet[2881]: E1105 15:52:43.643079 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.644575 kubelet[2881]: W1105 15:52:43.643089 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.644575 kubelet[2881]: E1105 15:52:43.643099 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.644575 kubelet[2881]: E1105 15:52:43.643345 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.644575 kubelet[2881]: W1105 15:52:43.643360 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.644575 kubelet[2881]: E1105 15:52:43.643373 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.644575 kubelet[2881]: E1105 15:52:43.643601 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.644575 kubelet[2881]: W1105 15:52:43.643615 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.644575 kubelet[2881]: E1105 15:52:43.643633 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.644952 kubelet[2881]: E1105 15:52:43.643848 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.644952 kubelet[2881]: W1105 15:52:43.643862 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.644952 kubelet[2881]: E1105 15:52:43.643876 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.644952 kubelet[2881]: E1105 15:52:43.644079 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.644952 kubelet[2881]: W1105 15:52:43.644093 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.644952 kubelet[2881]: E1105 15:52:43.644108 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.644952 kubelet[2881]: E1105 15:52:43.644379 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.644952 kubelet[2881]: W1105 15:52:43.644392 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.644952 kubelet[2881]: E1105 15:52:43.644405 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.644952 kubelet[2881]: E1105 15:52:43.644677 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.645323 kubelet[2881]: W1105 15:52:43.644693 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.645323 kubelet[2881]: E1105 15:52:43.644709 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.645323 kubelet[2881]: E1105 15:52:43.644929 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.645323 kubelet[2881]: W1105 15:52:43.644944 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.645323 kubelet[2881]: E1105 15:52:43.644958 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.645323 kubelet[2881]: E1105 15:52:43.645211 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.645323 kubelet[2881]: W1105 15:52:43.645243 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.645323 kubelet[2881]: E1105 15:52:43.645258 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.668900 kubelet[2881]: E1105 15:52:43.668840 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.668900 kubelet[2881]: W1105 15:52:43.668878 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.668900 kubelet[2881]: E1105 15:52:43.668907 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.672604 kubelet[2881]: E1105 15:52:43.672392 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.672604 kubelet[2881]: W1105 15:52:43.672505 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.672604 kubelet[2881]: E1105 15:52:43.672525 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.675062 kubelet[2881]: E1105 15:52:43.674620 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.675062 kubelet[2881]: W1105 15:52:43.674641 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.675062 kubelet[2881]: E1105 15:52:43.674661 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.676315 kubelet[2881]: E1105 15:52:43.674927 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.676315 kubelet[2881]: W1105 15:52:43.675579 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.676315 kubelet[2881]: E1105 15:52:43.675595 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.677733 kubelet[2881]: E1105 15:52:43.677642 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.677733 kubelet[2881]: W1105 15:52:43.677660 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.677733 kubelet[2881]: E1105 15:52:43.677674 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.678361 kubelet[2881]: E1105 15:52:43.678123 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.678361 kubelet[2881]: W1105 15:52:43.678159 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.678361 kubelet[2881]: E1105 15:52:43.678187 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.681894 kubelet[2881]: E1105 15:52:43.679392 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.681894 kubelet[2881]: W1105 15:52:43.679408 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.681894 kubelet[2881]: E1105 15:52:43.679422 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.681894 kubelet[2881]: E1105 15:52:43.681142 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.681894 kubelet[2881]: W1105 15:52:43.681155 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.681894 kubelet[2881]: E1105 15:52:43.681194 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.681894 kubelet[2881]: E1105 15:52:43.681416 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.681894 kubelet[2881]: W1105 15:52:43.681425 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.681894 kubelet[2881]: E1105 15:52:43.681435 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.681894 kubelet[2881]: E1105 15:52:43.681614 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.682299 kubelet[2881]: W1105 15:52:43.681622 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.682299 kubelet[2881]: E1105 15:52:43.681633 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.682299 kubelet[2881]: E1105 15:52:43.681860 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.682299 kubelet[2881]: W1105 15:52:43.681870 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.682299 kubelet[2881]: E1105 15:52:43.681880 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.683107 kubelet[2881]: E1105 15:52:43.682658 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.683107 kubelet[2881]: W1105 15:52:43.682671 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.683107 kubelet[2881]: E1105 15:52:43.682682 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.684609 kubelet[2881]: E1105 15:52:43.684502 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.684609 kubelet[2881]: W1105 15:52:43.684531 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.684609 kubelet[2881]: E1105 15:52:43.684545 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.685367 kubelet[2881]: E1105 15:52:43.685198 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.685367 kubelet[2881]: W1105 15:52:43.685233 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.685367 kubelet[2881]: E1105 15:52:43.685249 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.685738 kubelet[2881]: E1105 15:52:43.685725 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.685853 kubelet[2881]: W1105 15:52:43.685799 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.685853 kubelet[2881]: E1105 15:52:43.685813 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.686439 kubelet[2881]: E1105 15:52:43.686338 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.686439 kubelet[2881]: W1105 15:52:43.686350 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.686439 kubelet[2881]: E1105 15:52:43.686362 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.686687 kubelet[2881]: E1105 15:52:43.686648 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.686687 kubelet[2881]: W1105 15:52:43.686663 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.686687 kubelet[2881]: E1105 15:52:43.686674 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:52:43.687030 kubelet[2881]: E1105 15:52:43.686987 2881 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:52:43.687030 kubelet[2881]: W1105 15:52:43.687000 2881 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:52:43.687030 kubelet[2881]: E1105 15:52:43.687011 2881 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:52:43.698360 containerd[1640]: time="2025-11-05T15:52:43.698314678Z" level=info msg="Container 6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:43.712211 containerd[1640]: time="2025-11-05T15:52:43.710981187Z" level=info msg="CreateContainer within sandbox \"73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30\"" Nov 5 15:52:43.713949 containerd[1640]: time="2025-11-05T15:52:43.713750119Z" level=info msg="StartContainer for \"6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30\"" Nov 5 15:52:43.717521 containerd[1640]: time="2025-11-05T15:52:43.717371324Z" level=info msg="connecting to shim 6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30" address="unix:///run/containerd/s/7a83a122f2b4b9ac0c43822550ef89348d7aea3b6d2194c927fb2dca7746fb36" protocol=ttrpc version=3 Nov 5 15:52:43.741528 systemd[1]: Started cri-containerd-6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30.scope - libcontainer container 6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30. 
Nov 5 15:52:43.797730 containerd[1640]: time="2025-11-05T15:52:43.797651777Z" level=info msg="StartContainer for \"6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30\" returns successfully" Nov 5 15:52:43.813013 systemd[1]: cri-containerd-6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30.scope: Deactivated successfully. Nov 5 15:52:43.821018 containerd[1640]: time="2025-11-05T15:52:43.820980278Z" level=info msg="received exit event container_id:\"6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30\" id:\"6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30\" pid:3587 exited_at:{seconds:1762357963 nanos:816035054}" Nov 5 15:52:43.831200 containerd[1640]: time="2025-11-05T15:52:43.831062042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30\" id:\"6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30\" pid:3587 exited_at:{seconds:1762357963 nanos:816035054}" Nov 5 15:52:43.846717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b449466b17cb25ea12040dfa6bdbe2b5e4d148fda3bf18053fe9278a440cc30-rootfs.mount: Deactivated successfully. 
Nov 5 15:52:44.501322 kubelet[2881]: E1105 15:52:44.501220 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:52:44.619191 containerd[1640]: time="2025-11-05T15:52:44.617507698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:52:46.504080 kubelet[2881]: E1105 15:52:46.501924 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:52:47.540740 containerd[1640]: time="2025-11-05T15:52:47.540663024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:47.541985 containerd[1640]: time="2025-11-05T15:52:47.541777814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 5 15:52:47.542715 containerd[1640]: time="2025-11-05T15:52:47.542688788Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:47.544596 containerd[1640]: time="2025-11-05T15:52:47.544557689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:52:47.545434 containerd[1640]: time="2025-11-05T15:52:47.545400221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with 
image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.927850222s" Nov 5 15:52:47.545527 containerd[1640]: time="2025-11-05T15:52:47.545511624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 15:52:47.556962 containerd[1640]: time="2025-11-05T15:52:47.556557651Z" level=info msg="CreateContainer within sandbox \"73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:52:47.568591 containerd[1640]: time="2025-11-05T15:52:47.567320639Z" level=info msg="Container 7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:52:47.578773 containerd[1640]: time="2025-11-05T15:52:47.578730995Z" level=info msg="CreateContainer within sandbox \"73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39\"" Nov 5 15:52:47.579642 containerd[1640]: time="2025-11-05T15:52:47.579618859Z" level=info msg="StartContainer for \"7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39\"" Nov 5 15:52:47.581400 containerd[1640]: time="2025-11-05T15:52:47.581359145Z" level=info msg="connecting to shim 7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39" address="unix:///run/containerd/s/7a83a122f2b4b9ac0c43822550ef89348d7aea3b6d2194c927fb2dca7746fb36" protocol=ttrpc version=3 Nov 5 15:52:47.609386 systemd[1]: Started cri-containerd-7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39.scope - libcontainer container 
7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39. Nov 5 15:52:47.663442 containerd[1640]: time="2025-11-05T15:52:47.663351844Z" level=info msg="StartContainer for \"7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39\" returns successfully" Nov 5 15:52:47.953013 kubelet[2881]: I1105 15:52:47.952939 2881 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 15:52:48.145151 systemd[1]: cri-containerd-7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39.scope: Deactivated successfully. Nov 5 15:52:48.146564 systemd[1]: cri-containerd-7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39.scope: Consumed 370ms CPU time, 166.8M memory peak, 4.4M read from disk, 171.3M written to disk. Nov 5 15:52:48.199968 containerd[1640]: time="2025-11-05T15:52:48.196378593Z" level=info msg="received exit event container_id:\"7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39\" id:\"7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39\" pid:3647 exited_at:{seconds:1762357968 nanos:195871839}" Nov 5 15:52:48.199968 containerd[1640]: time="2025-11-05T15:52:48.198254352Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39\" id:\"7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39\" pid:3647 exited_at:{seconds:1762357968 nanos:195871839}" Nov 5 15:52:48.207252 kubelet[2881]: I1105 15:52:48.207144 2881 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:52:48.260070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7438029b394a68c60747aeeeb9ef94e8ea1a15e81049542c380dbbceda706c39-rootfs.mount: Deactivated successfully. Nov 5 15:52:48.292425 systemd[1]: Created slice kubepods-burstable-podb0705c42_1f08_4eb9_bf1f_7320280b9c42.slice - libcontainer container kubepods-burstable-podb0705c42_1f08_4eb9_bf1f_7320280b9c42.slice. 
Nov 5 15:52:48.302504 systemd[1]: Created slice kubepods-burstable-poda36eb2db_7320_42f2_91ec_0f462f8e66f5.slice - libcontainer container kubepods-burstable-poda36eb2db_7320_42f2_91ec_0f462f8e66f5.slice. Nov 5 15:52:48.307231 kubelet[2881]: I1105 15:52:48.307206 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed0296fa-6511-43dd-8149-4b0a2a3f1b39-calico-apiserver-certs\") pod \"calico-apiserver-6c6845c454-lq899\" (UID: \"ed0296fa-6511-43dd-8149-4b0a2a3f1b39\") " pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" Nov 5 15:52:48.310836 kubelet[2881]: I1105 15:52:48.310681 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a36eb2db-7320-42f2-91ec-0f462f8e66f5-config-volume\") pod \"coredns-674b8bbfcf-gtx6k\" (UID: \"a36eb2db-7320-42f2-91ec-0f462f8e66f5\") " pod="kube-system/coredns-674b8bbfcf-gtx6k" Nov 5 15:52:48.310726 systemd[1]: Created slice kubepods-besteffort-poded0296fa_6511_43dd_8149_4b0a2a3f1b39.slice - libcontainer container kubepods-besteffort-poded0296fa_6511_43dd_8149_4b0a2a3f1b39.slice. 
Nov 5 15:52:48.310995 kubelet[2881]: I1105 15:52:48.310983 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw5dx\" (UniqueName: \"kubernetes.io/projected/cac08146-1ce4-4e81-8229-9518d20f8fa6-kube-api-access-cw5dx\") pod \"calico-apiserver-6c6845c454-2db4p\" (UID: \"cac08146-1ce4-4e81-8229-9518d20f8fa6\") " pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" Nov 5 15:52:48.312239 kubelet[2881]: I1105 15:52:48.312213 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0705c42-1f08-4eb9-bf1f-7320280b9c42-config-volume\") pod \"coredns-674b8bbfcf-vpxj6\" (UID: \"b0705c42-1f08-4eb9-bf1f-7320280b9c42\") " pod="kube-system/coredns-674b8bbfcf-vpxj6" Nov 5 15:52:48.312735 kubelet[2881]: I1105 15:52:48.312725 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjtrb\" (UniqueName: \"kubernetes.io/projected/b0705c42-1f08-4eb9-bf1f-7320280b9c42-kube-api-access-kjtrb\") pod \"coredns-674b8bbfcf-vpxj6\" (UID: \"b0705c42-1f08-4eb9-bf1f-7320280b9c42\") " pod="kube-system/coredns-674b8bbfcf-vpxj6" Nov 5 15:52:48.314223 kubelet[2881]: I1105 15:52:48.314142 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cac08146-1ce4-4e81-8229-9518d20f8fa6-calico-apiserver-certs\") pod \"calico-apiserver-6c6845c454-2db4p\" (UID: \"cac08146-1ce4-4e81-8229-9518d20f8fa6\") " pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" Nov 5 15:52:48.317194 kubelet[2881]: I1105 15:52:48.314510 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrd9n\" (UniqueName: \"kubernetes.io/projected/a36eb2db-7320-42f2-91ec-0f462f8e66f5-kube-api-access-nrd9n\") pod \"coredns-674b8bbfcf-gtx6k\" 
(UID: \"a36eb2db-7320-42f2-91ec-0f462f8e66f5\") " pod="kube-system/coredns-674b8bbfcf-gtx6k" Nov 5 15:52:48.317194 kubelet[2881]: I1105 15:52:48.314529 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m87n\" (UniqueName: \"kubernetes.io/projected/ed0296fa-6511-43dd-8149-4b0a2a3f1b39-kube-api-access-8m87n\") pod \"calico-apiserver-6c6845c454-lq899\" (UID: \"ed0296fa-6511-43dd-8149-4b0a2a3f1b39\") " pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" Nov 5 15:52:48.318075 systemd[1]: Created slice kubepods-besteffort-pod53b3cb5e_bf4b_46ee_9b8d_ae19375a2db6.slice - libcontainer container kubepods-besteffort-pod53b3cb5e_bf4b_46ee_9b8d_ae19375a2db6.slice. Nov 5 15:52:48.323291 systemd[1]: Created slice kubepods-besteffort-pode18327f2_d68d_49ab_a654_9572aaec9d64.slice - libcontainer container kubepods-besteffort-pode18327f2_d68d_49ab_a654_9572aaec9d64.slice. Nov 5 15:52:48.329330 systemd[1]: Created slice kubepods-besteffort-pod882e95b9_8bc5_46aa_95f2_d288e79d54ed.slice - libcontainer container kubepods-besteffort-pod882e95b9_8bc5_46aa_95f2_d288e79d54ed.slice. Nov 5 15:52:48.333621 systemd[1]: Created slice kubepods-besteffort-podcac08146_1ce4_4e81_8229_9518d20f8fa6.slice - libcontainer container kubepods-besteffort-podcac08146_1ce4_4e81_8229_9518d20f8fa6.slice. 
Nov 5 15:52:48.415000 kubelet[2881]: I1105 15:52:48.414880 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmt4\" (UniqueName: \"kubernetes.io/projected/53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6-kube-api-access-fxmt4\") pod \"calico-kube-controllers-78c8bcf4d4-sxk8m\" (UID: \"53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6\") " pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" Nov 5 15:52:48.415338 kubelet[2881]: I1105 15:52:48.415108 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-backend-key-pair\") pod \"whisker-85b966c974-v27qv\" (UID: \"e18327f2-d68d-49ab-a654-9572aaec9d64\") " pod="calico-system/whisker-85b966c974-v27qv" Nov 5 15:52:48.415338 kubelet[2881]: I1105 15:52:48.415393 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-ca-bundle\") pod \"whisker-85b966c974-v27qv\" (UID: \"e18327f2-d68d-49ab-a654-9572aaec9d64\") " pod="calico-system/whisker-85b966c974-v27qv" Nov 5 15:52:48.417068 kubelet[2881]: I1105 15:52:48.415421 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltrhf\" (UniqueName: \"kubernetes.io/projected/e18327f2-d68d-49ab-a654-9572aaec9d64-kube-api-access-ltrhf\") pod \"whisker-85b966c974-v27qv\" (UID: \"e18327f2-d68d-49ab-a654-9572aaec9d64\") " pod="calico-system/whisker-85b966c974-v27qv" Nov 5 15:52:48.417068 kubelet[2881]: I1105 15:52:48.415682 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882e95b9-8bc5-46aa-95f2-d288e79d54ed-config\") pod \"goldmane-666569f655-lp6gv\" (UID: 
\"882e95b9-8bc5-46aa-95f2-d288e79d54ed\") " pod="calico-system/goldmane-666569f655-lp6gv" Nov 5 15:52:48.417068 kubelet[2881]: I1105 15:52:48.415705 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5ql9\" (UniqueName: \"kubernetes.io/projected/882e95b9-8bc5-46aa-95f2-d288e79d54ed-kube-api-access-l5ql9\") pod \"goldmane-666569f655-lp6gv\" (UID: \"882e95b9-8bc5-46aa-95f2-d288e79d54ed\") " pod="calico-system/goldmane-666569f655-lp6gv" Nov 5 15:52:48.417068 kubelet[2881]: I1105 15:52:48.415728 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e95b9-8bc5-46aa-95f2-d288e79d54ed-goldmane-ca-bundle\") pod \"goldmane-666569f655-lp6gv\" (UID: \"882e95b9-8bc5-46aa-95f2-d288e79d54ed\") " pod="calico-system/goldmane-666569f655-lp6gv" Nov 5 15:52:48.417068 kubelet[2881]: I1105 15:52:48.415744 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6-tigera-ca-bundle\") pod \"calico-kube-controllers-78c8bcf4d4-sxk8m\" (UID: \"53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6\") " pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" Nov 5 15:52:48.417724 kubelet[2881]: I1105 15:52:48.415785 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/882e95b9-8bc5-46aa-95f2-d288e79d54ed-goldmane-key-pair\") pod \"goldmane-666569f655-lp6gv\" (UID: \"882e95b9-8bc5-46aa-95f2-d288e79d54ed\") " pod="calico-system/goldmane-666569f655-lp6gv" Nov 5 15:52:48.506542 systemd[1]: Created slice kubepods-besteffort-podb030d0a4_2581_4144_a827_7d0dc3133cf3.slice - libcontainer container kubepods-besteffort-podb030d0a4_2581_4144_a827_7d0dc3133cf3.slice. 
Nov 5 15:52:48.515732 containerd[1640]: time="2025-11-05T15:52:48.515675926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmgb9,Uid:b030d0a4-2581-4144-a827-7d0dc3133cf3,Namespace:calico-system,Attempt:0,}" Nov 5 15:52:48.602389 containerd[1640]: time="2025-11-05T15:52:48.602325287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpxj6,Uid:b0705c42-1f08-4eb9-bf1f-7320280b9c42,Namespace:kube-system,Attempt:0,}" Nov 5 15:52:48.622499 containerd[1640]: time="2025-11-05T15:52:48.622456147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-lq899,Uid:ed0296fa-6511-43dd-8149-4b0a2a3f1b39,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:52:48.622911 containerd[1640]: time="2025-11-05T15:52:48.622461927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtx6k,Uid:a36eb2db-7320-42f2-91ec-0f462f8e66f5,Namespace:kube-system,Attempt:0,}" Nov 5 15:52:48.625346 containerd[1640]: time="2025-11-05T15:52:48.625158778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c8bcf4d4-sxk8m,Uid:53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6,Namespace:calico-system,Attempt:0,}" Nov 5 15:52:48.629691 containerd[1640]: time="2025-11-05T15:52:48.629651466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b966c974-v27qv,Uid:e18327f2-d68d-49ab-a654-9572aaec9d64,Namespace:calico-system,Attempt:0,}" Nov 5 15:52:48.633511 containerd[1640]: time="2025-11-05T15:52:48.633465636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lp6gv,Uid:882e95b9-8bc5-46aa-95f2-d288e79d54ed,Namespace:calico-system,Attempt:0,}" Nov 5 15:52:48.637612 containerd[1640]: time="2025-11-05T15:52:48.637592555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-2db4p,Uid:cac08146-1ce4-4e81-8229-9518d20f8fa6,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:52:48.703977 containerd[1640]: 
time="2025-11-05T15:52:48.703640813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:52:48.852296 containerd[1640]: time="2025-11-05T15:52:48.852240784Z" level=error msg="Failed to destroy network for sandbox \"4efb6215d5f7eb56405f75ba2e3918a226b373c951c7548dc9bdb4fd17ca8223\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.855497 containerd[1640]: time="2025-11-05T15:52:48.855257114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpxj6,Uid:b0705c42-1f08-4eb9-bf1f-7320280b9c42,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4efb6215d5f7eb56405f75ba2e3918a226b373c951c7548dc9bdb4fd17ca8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.857601 kubelet[2881]: E1105 15:52:48.857195 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4efb6215d5f7eb56405f75ba2e3918a226b373c951c7548dc9bdb4fd17ca8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.857601 kubelet[2881]: E1105 15:52:48.857319 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4efb6215d5f7eb56405f75ba2e3918a226b373c951c7548dc9bdb4fd17ca8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-vpxj6" Nov 5 15:52:48.857601 kubelet[2881]: E1105 15:52:48.857338 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4efb6215d5f7eb56405f75ba2e3918a226b373c951c7548dc9bdb4fd17ca8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vpxj6" Nov 5 15:52:48.857783 kubelet[2881]: E1105 15:52:48.857421 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vpxj6_kube-system(b0705c42-1f08-4eb9-bf1f-7320280b9c42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vpxj6_kube-system(b0705c42-1f08-4eb9-bf1f-7320280b9c42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4efb6215d5f7eb56405f75ba2e3918a226b373c951c7548dc9bdb4fd17ca8223\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vpxj6" podUID="b0705c42-1f08-4eb9-bf1f-7320280b9c42" Nov 5 15:52:48.875773 containerd[1640]: time="2025-11-05T15:52:48.875669000Z" level=error msg="Failed to destroy network for sandbox \"fb42eb657a206ab1ef15a96c7c15f3fad39b6d37929d296eab9e1438e20ddb18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.880155 containerd[1640]: time="2025-11-05T15:52:48.879647815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c8bcf4d4-sxk8m,Uid:53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42eb657a206ab1ef15a96c7c15f3fad39b6d37929d296eab9e1438e20ddb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.880543 kubelet[2881]: E1105 15:52:48.879857 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42eb657a206ab1ef15a96c7c15f3fad39b6d37929d296eab9e1438e20ddb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.880543 kubelet[2881]: E1105 15:52:48.879909 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42eb657a206ab1ef15a96c7c15f3fad39b6d37929d296eab9e1438e20ddb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" Nov 5 15:52:48.880543 kubelet[2881]: E1105 15:52:48.879926 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb42eb657a206ab1ef15a96c7c15f3fad39b6d37929d296eab9e1438e20ddb18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" Nov 5 15:52:48.880667 kubelet[2881]: E1105 15:52:48.879966 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-78c8bcf4d4-sxk8m_calico-system(53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78c8bcf4d4-sxk8m_calico-system(53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb42eb657a206ab1ef15a96c7c15f3fad39b6d37929d296eab9e1438e20ddb18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:52:48.887406 containerd[1640]: time="2025-11-05T15:52:48.887316357Z" level=error msg="Failed to destroy network for sandbox \"882776817fbf248bd80d7400f983b544a02e38428b0040a4efed64b3db86ec91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.890691 containerd[1640]: time="2025-11-05T15:52:48.890657335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtx6k,Uid:a36eb2db-7320-42f2-91ec-0f462f8e66f5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"882776817fbf248bd80d7400f983b544a02e38428b0040a4efed64b3db86ec91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.891234 containerd[1640]: time="2025-11-05T15:52:48.891104067Z" level=error msg="Failed to destroy network for sandbox \"63282d2d506145d45df80a4793e56dd5712a5c13d0d0aeb7a0d466878784d4c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.891278 kubelet[2881]: E1105 15:52:48.891033 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"882776817fbf248bd80d7400f983b544a02e38428b0040a4efed64b3db86ec91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.891385 kubelet[2881]: E1105 15:52:48.891326 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"882776817fbf248bd80d7400f983b544a02e38428b0040a4efed64b3db86ec91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gtx6k" Nov 5 15:52:48.891385 kubelet[2881]: E1105 15:52:48.891347 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"882776817fbf248bd80d7400f983b544a02e38428b0040a4efed64b3db86ec91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gtx6k" Nov 5 15:52:48.891568 kubelet[2881]: E1105 15:52:48.891549 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gtx6k_kube-system(a36eb2db-7320-42f2-91ec-0f462f8e66f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gtx6k_kube-system(a36eb2db-7320-42f2-91ec-0f462f8e66f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"882776817fbf248bd80d7400f983b544a02e38428b0040a4efed64b3db86ec91\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gtx6k" podUID="a36eb2db-7320-42f2-91ec-0f462f8e66f5" Nov 5 15:52:48.893596 containerd[1640]: time="2025-11-05T15:52:48.893151951Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b966c974-v27qv,Uid:e18327f2-d68d-49ab-a654-9572aaec9d64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63282d2d506145d45df80a4793e56dd5712a5c13d0d0aeb7a0d466878784d4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.894122 kubelet[2881]: E1105 15:52:48.893790 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63282d2d506145d45df80a4793e56dd5712a5c13d0d0aeb7a0d466878784d4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.894122 kubelet[2881]: E1105 15:52:48.893812 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63282d2d506145d45df80a4793e56dd5712a5c13d0d0aeb7a0d466878784d4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85b966c974-v27qv" Nov 5 15:52:48.894122 kubelet[2881]: E1105 15:52:48.893823 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"63282d2d506145d45df80a4793e56dd5712a5c13d0d0aeb7a0d466878784d4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-85b966c974-v27qv" Nov 5 15:52:48.894276 kubelet[2881]: E1105 15:52:48.893847 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-85b966c974-v27qv_calico-system(e18327f2-d68d-49ab-a654-9572aaec9d64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-85b966c974-v27qv_calico-system(e18327f2-d68d-49ab-a654-9572aaec9d64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63282d2d506145d45df80a4793e56dd5712a5c13d0d0aeb7a0d466878784d4c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-85b966c974-v27qv" podUID="e18327f2-d68d-49ab-a654-9572aaec9d64" Nov 5 15:52:48.905925 containerd[1640]: time="2025-11-05T15:52:48.905818234Z" level=error msg="Failed to destroy network for sandbox \"52b70bf57eea317cc567d831ca5fff1b70dc7d7b2d46590d302145f7d1d3881c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.906009 containerd[1640]: time="2025-11-05T15:52:48.905929577Z" level=error msg="Failed to destroy network for sandbox \"ca15a6913775dc2995e09aec3a6457a3d869472e1c4521fc49dcbba39ca1ae83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.906237 containerd[1640]: time="2025-11-05T15:52:48.906050000Z" level=error msg="Failed to destroy network for 
sandbox \"53661e14866009bedc2307e529326b018515540b38f142e8d95755af32d3f581\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.906971 containerd[1640]: time="2025-11-05T15:52:48.906519502Z" level=error msg="Failed to destroy network for sandbox \"63e12b81d5eee7e8aa3318eb73f369d8eafb28f617c55de34425c33248df9f16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.907621 containerd[1640]: time="2025-11-05T15:52:48.907593331Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmgb9,Uid:b030d0a4-2581-4144-a827-7d0dc3133cf3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52b70bf57eea317cc567d831ca5fff1b70dc7d7b2d46590d302145f7d1d3881c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.907621 containerd[1640]: time="2025-11-05T15:52:48.908595617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-2db4p,Uid:cac08146-1ce4-4e81-8229-9518d20f8fa6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"53661e14866009bedc2307e529326b018515540b38f142e8d95755af32d3f581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.908741 kubelet[2881]: E1105 15:52:48.908048 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"52b70bf57eea317cc567d831ca5fff1b70dc7d7b2d46590d302145f7d1d3881c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.908741 kubelet[2881]: E1105 15:52:48.908104 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52b70bf57eea317cc567d831ca5fff1b70dc7d7b2d46590d302145f7d1d3881c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zmgb9" Nov 5 15:52:48.908741 kubelet[2881]: E1105 15:52:48.908121 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52b70bf57eea317cc567d831ca5fff1b70dc7d7b2d46590d302145f7d1d3881c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zmgb9" Nov 5 15:52:48.908799 kubelet[2881]: E1105 15:52:48.908574 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52b70bf57eea317cc567d831ca5fff1b70dc7d7b2d46590d302145f7d1d3881c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zmgb9" 
podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:52:48.910258 kubelet[2881]: E1105 15:52:48.909389 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53661e14866009bedc2307e529326b018515540b38f142e8d95755af32d3f581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.910258 kubelet[2881]: E1105 15:52:48.909505 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53661e14866009bedc2307e529326b018515540b38f142e8d95755af32d3f581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" Nov 5 15:52:48.910258 kubelet[2881]: E1105 15:52:48.909553 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53661e14866009bedc2307e529326b018515540b38f142e8d95755af32d3f581\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" Nov 5 15:52:48.910394 containerd[1640]: time="2025-11-05T15:52:48.909642534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lp6gv,Uid:882e95b9-8bc5-46aa-95f2-d288e79d54ed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63e12b81d5eee7e8aa3318eb73f369d8eafb28f617c55de34425c33248df9f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.910434 kubelet[2881]: E1105 15:52:48.909718 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6845c454-2db4p_calico-apiserver(cac08146-1ce4-4e81-8229-9518d20f8fa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6845c454-2db4p_calico-apiserver(cac08146-1ce4-4e81-8229-9518d20f8fa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53661e14866009bedc2307e529326b018515540b38f142e8d95755af32d3f581\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:52:48.910434 kubelet[2881]: E1105 15:52:48.910063 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63e12b81d5eee7e8aa3318eb73f369d8eafb28f617c55de34425c33248df9f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.910434 kubelet[2881]: E1105 15:52:48.910132 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63e12b81d5eee7e8aa3318eb73f369d8eafb28f617c55de34425c33248df9f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lp6gv" Nov 5 15:52:48.910492 kubelet[2881]: E1105 15:52:48.910259 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"63e12b81d5eee7e8aa3318eb73f369d8eafb28f617c55de34425c33248df9f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-lp6gv" Nov 5 15:52:48.910492 kubelet[2881]: E1105 15:52:48.910430 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-lp6gv_calico-system(882e95b9-8bc5-46aa-95f2-d288e79d54ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-lp6gv_calico-system(882e95b9-8bc5-46aa-95f2-d288e79d54ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63e12b81d5eee7e8aa3318eb73f369d8eafb28f617c55de34425c33248df9f16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:52:48.911260 kubelet[2881]: E1105 15:52:48.910762 2881 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca15a6913775dc2995e09aec3a6457a3d869472e1c4521fc49dcbba39ca1ae83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.911260 kubelet[2881]: E1105 15:52:48.910788 2881 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca15a6913775dc2995e09aec3a6457a3d869472e1c4521fc49dcbba39ca1ae83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" Nov 5 15:52:48.911260 kubelet[2881]: E1105 15:52:48.910805 2881 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca15a6913775dc2995e09aec3a6457a3d869472e1c4521fc49dcbba39ca1ae83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" Nov 5 15:52:48.911329 containerd[1640]: time="2025-11-05T15:52:48.910579109Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-lq899,Uid:ed0296fa-6511-43dd-8149-4b0a2a3f1b39,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca15a6913775dc2995e09aec3a6457a3d869472e1c4521fc49dcbba39ca1ae83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:52:48.911373 kubelet[2881]: E1105 15:52:48.910883 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6845c454-lq899_calico-apiserver(ed0296fa-6511-43dd-8149-4b0a2a3f1b39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6845c454-lq899_calico-apiserver(ed0296fa-6511-43dd-8149-4b0a2a3f1b39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca15a6913775dc2995e09aec3a6457a3d869472e1c4521fc49dcbba39ca1ae83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" 
podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39"
Nov 5 15:52:49.567628 systemd[1]: run-netns-cni\x2dc99ed093\x2d34ce\x2d88b2\x2d0bb0\x2d84b10e43fa6a.mount: Deactivated successfully.
Nov 5 15:52:49.567728 systemd[1]: run-netns-cni\x2d7bd007d4\x2d97c1\x2dfdc4\x2d4f5b\x2d8cdc16a902f5.mount: Deactivated successfully.
Nov 5 15:52:49.567785 systemd[1]: run-netns-cni\x2db40b685a\x2d5fc8\x2ddd81\x2da0e5\x2da042e1a68dc3.mount: Deactivated successfully.
Nov 5 15:52:49.567835 systemd[1]: run-netns-cni\x2dcc0ab757\x2d97b3\x2da564\x2d71f2\x2da3366646a8fb.mount: Deactivated successfully.
Nov 5 15:52:49.567885 systemd[1]: run-netns-cni\x2d74092833\x2def51\x2d19dc\x2d7d94\x2d4c068923bb75.mount: Deactivated successfully.
Nov 5 15:52:55.683922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112329210.mount: Deactivated successfully.
Nov 5 15:52:55.783852 containerd[1640]: time="2025-11-05T15:52:55.783778579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675"
Nov 5 15:52:55.788376 containerd[1640]: time="2025-11-05T15:52:55.788332487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:55.801270 containerd[1640]: time="2025-11-05T15:52:55.801220051Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:55.833269 containerd[1640]: time="2025-11-05T15:52:55.833199395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:52:55.837087 containerd[1640]: time="2025-11-05T15:52:55.837031705Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.12992731s"
Nov 5 15:52:55.837087 containerd[1640]: time="2025-11-05T15:52:55.837088027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\""
Nov 5 15:52:55.865345 containerd[1640]: time="2025-11-05T15:52:55.865297171Z" level=info msg="CreateContainer within sandbox \"73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Nov 5 15:52:55.921115 containerd[1640]: time="2025-11-05T15:52:55.920402050Z" level=info msg="Container af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:52:55.921734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757932355.mount: Deactivated successfully.
Nov 5 15:52:55.976572 containerd[1640]: time="2025-11-05T15:52:55.968087405Z" level=info msg="CreateContainer within sandbox \"73a4e527db6467ba38104c597ce7860a0da0726f14f1ea6556cd266e9095d3d9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\""
Nov 5 15:52:55.976572 containerd[1640]: time="2025-11-05T15:52:55.968993197Z" level=info msg="StartContainer for \"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\""
Nov 5 15:52:55.994537 containerd[1640]: time="2025-11-05T15:52:55.994435706Z" level=info msg="connecting to shim af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc" address="unix:///run/containerd/s/7a83a122f2b4b9ac0c43822550ef89348d7aea3b6d2194c927fb2dca7746fb36" protocol=ttrpc version=3
Nov 5 15:52:56.073497 systemd[1]: Started cri-containerd-af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc.scope - libcontainer container af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc.
Nov 5 15:52:56.141684 containerd[1640]: time="2025-11-05T15:52:56.141636544Z" level=info msg="StartContainer for \"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\" returns successfully"
Nov 5 15:52:56.349207 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Nov 5 15:52:56.353656 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Nov 5 15:52:56.696752 kubelet[2881]: I1105 15:52:56.696518 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-backend-key-pair\") pod \"e18327f2-d68d-49ab-a654-9572aaec9d64\" (UID: \"e18327f2-d68d-49ab-a654-9572aaec9d64\") "
Nov 5 15:52:56.696752 kubelet[2881]: I1105 15:52:56.696595 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-ca-bundle\") pod \"e18327f2-d68d-49ab-a654-9572aaec9d64\" (UID: \"e18327f2-d68d-49ab-a654-9572aaec9d64\") "
Nov 5 15:52:56.696752 kubelet[2881]: I1105 15:52:56.696611 2881 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltrhf\" (UniqueName: \"kubernetes.io/projected/e18327f2-d68d-49ab-a654-9572aaec9d64-kube-api-access-ltrhf\") pod \"e18327f2-d68d-49ab-a654-9572aaec9d64\" (UID: \"e18327f2-d68d-49ab-a654-9572aaec9d64\") "
Nov 5 15:52:56.701632 kubelet[2881]: I1105 15:52:56.701591 2881 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e18327f2-d68d-49ab-a654-9572aaec9d64" (UID: "e18327f2-d68d-49ab-a654-9572aaec9d64"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Nov 5 15:52:56.713714 systemd[1]: var-lib-kubelet-pods-e18327f2\x2dd68d\x2d49ab\x2da654\x2d9572aaec9d64-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Nov 5 15:52:56.719629 systemd[1]: var-lib-kubelet-pods-e18327f2\x2dd68d\x2d49ab\x2da654\x2d9572aaec9d64-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dltrhf.mount: Deactivated successfully.
Nov 5 15:52:56.721624 kubelet[2881]: I1105 15:52:56.720982 2881 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e18327f2-d68d-49ab-a654-9572aaec9d64-kube-api-access-ltrhf" (OuterVolumeSpecName: "kube-api-access-ltrhf") pod "e18327f2-d68d-49ab-a654-9572aaec9d64" (UID: "e18327f2-d68d-49ab-a654-9572aaec9d64"). InnerVolumeSpecName "kube-api-access-ltrhf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Nov 5 15:52:56.722302 kubelet[2881]: I1105 15:52:56.722286 2881 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e18327f2-d68d-49ab-a654-9572aaec9d64" (UID: "e18327f2-d68d-49ab-a654-9572aaec9d64"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Nov 5 15:52:56.799719 kubelet[2881]: I1105 15:52:56.799616 2881 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-backend-key-pair\") on node \"ci-4487-0-1-1-e8a5680daa\" DevicePath \"\""
Nov 5 15:52:56.800303 kubelet[2881]: I1105 15:52:56.800291 2881 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e18327f2-d68d-49ab-a654-9572aaec9d64-whisker-ca-bundle\") on node \"ci-4487-0-1-1-e8a5680daa\" DevicePath \"\""
Nov 5 15:52:56.800779 kubelet[2881]: I1105 15:52:56.800765 2881 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ltrhf\" (UniqueName: \"kubernetes.io/projected/e18327f2-d68d-49ab-a654-9572aaec9d64-kube-api-access-ltrhf\") on node \"ci-4487-0-1-1-e8a5680daa\" DevicePath \"\""
Nov 5 15:52:56.957206 containerd[1640]: time="2025-11-05T15:52:56.957000643Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\" id:\"bd3d1546364d3ae98c30e315f41f142bd4ded38364d89d1f8fbedc95866187f8\" pid:3997 exit_status:1 exited_at:{seconds:1762357976 nanos:956473610}"
Nov 5 15:52:57.029614 systemd[1]: Removed slice kubepods-besteffort-pode18327f2_d68d_49ab_a654_9572aaec9d64.slice - libcontainer container kubepods-besteffort-pode18327f2_d68d_49ab_a654_9572aaec9d64.slice.
Nov 5 15:52:57.055189 kubelet[2881]: I1105 15:52:57.052373 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zbtz2" podStartSLOduration=2.612682632 podStartE2EDuration="19.052348808s" podCreationTimestamp="2025-11-05 15:52:38 +0000 UTC" firstStartedPulling="2025-11-05 15:52:39.398360213 +0000 UTC m=+20.048571894" lastFinishedPulling="2025-11-05 15:52:55.838026389 +0000 UTC m=+36.488238070" observedRunningTime="2025-11-05 15:52:56.754323925 +0000 UTC m=+37.404535616" watchObservedRunningTime="2025-11-05 15:52:57.052348808 +0000 UTC m=+37.702560539"
Nov 5 15:52:57.144415 systemd[1]: Created slice kubepods-besteffort-pod6c65c2f5_d1b1_431c_b576_9bb521d35cd6.slice - libcontainer container kubepods-besteffort-pod6c65c2f5_d1b1_431c_b576_9bb521d35cd6.slice.
Nov 5 15:52:57.205415 kubelet[2881]: I1105 15:52:57.205326 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c65c2f5-d1b1-431c-b576-9bb521d35cd6-whisker-ca-bundle\") pod \"whisker-6847488456-ddlsx\" (UID: \"6c65c2f5-d1b1-431c-b576-9bb521d35cd6\") " pod="calico-system/whisker-6847488456-ddlsx"
Nov 5 15:52:57.205415 kubelet[2881]: I1105 15:52:57.205391 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c65c2f5-d1b1-431c-b576-9bb521d35cd6-whisker-backend-key-pair\") pod \"whisker-6847488456-ddlsx\" (UID: \"6c65c2f5-d1b1-431c-b576-9bb521d35cd6\") " pod="calico-system/whisker-6847488456-ddlsx"
Nov 5 15:52:57.205415 kubelet[2881]: I1105 15:52:57.205414 2881 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97jq\" (UniqueName: \"kubernetes.io/projected/6c65c2f5-d1b1-431c-b576-9bb521d35cd6-kube-api-access-s97jq\") pod \"whisker-6847488456-ddlsx\" (UID: \"6c65c2f5-d1b1-431c-b576-9bb521d35cd6\") " pod="calico-system/whisker-6847488456-ddlsx"
Nov 5 15:52:57.449865 containerd[1640]: time="2025-11-05T15:52:57.449809086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6847488456-ddlsx,Uid:6c65c2f5-d1b1-431c-b576-9bb521d35cd6,Namespace:calico-system,Attempt:0,}"
Nov 5 15:52:57.508025 kubelet[2881]: I1105 15:52:57.507875 2881 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e18327f2-d68d-49ab-a654-9572aaec9d64" path="/var/lib/kubelet/pods/e18327f2-d68d-49ab-a654-9572aaec9d64/volumes"
Nov 5 15:52:57.862353 systemd-networkd[1509]: calic14bedfa6ad: Link UP
Nov 5 15:52:57.862673 systemd-networkd[1509]: calic14bedfa6ad: Gained carrier
Nov 5 15:52:57.889528 containerd[1640]: 2025-11-05 15:52:57.514 [INFO][4010] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Nov 5 15:52:57.889528 containerd[1640]: 2025-11-05 15:52:57.555 [INFO][4010] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0 whisker-6847488456- calico-system 6c65c2f5-d1b1-431c-b576-9bb521d35cd6 886 0 2025-11-05 15:52:57 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6847488456 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4487-0-1-1-e8a5680daa whisker-6847488456-ddlsx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic14bedfa6ad [] [] }} ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-"
Nov 5 15:52:57.889528 containerd[1640]: 2025-11-05 15:52:57.555 [INFO][4010] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0"
Nov 5 15:52:57.889528 containerd[1640]: 2025-11-05 15:52:57.742 [INFO][4023] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" HandleID="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Workload="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0"
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.743 [INFO][4023] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" HandleID="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Workload="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102e00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-1-1-e8a5680daa", "pod":"whisker-6847488456-ddlsx", "timestamp":"2025-11-05 15:52:57.742734192 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.744 [INFO][4023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.744 [INFO][4023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.745 [INFO][4023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa'
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.766 [INFO][4023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" host="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.782 [INFO][4023] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.791 [INFO][4023] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.794 [INFO][4023] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:57.889780 containerd[1640]: 2025-11-05 15:52:57.799 [INFO][4023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa"
Nov 5 15:52:57.892004
containerd[1640]: 2025-11-05 15:52:57.799 [INFO][4023] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:57.892004 containerd[1640]: 2025-11-05 15:52:57.802 [INFO][4023] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c Nov 5 15:52:57.892004 containerd[1640]: 2025-11-05 15:52:57.809 [INFO][4023] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:57.892004 containerd[1640]: 2025-11-05 15:52:57.820 [INFO][4023] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.65/26] block=192.168.51.64/26 handle="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:57.892004 containerd[1640]: 2025-11-05 15:52:57.820 [INFO][4023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.65/26] handle="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:57.892004 containerd[1640]: 2025-11-05 15:52:57.820 [INFO][4023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:52:57.892004 containerd[1640]: 2025-11-05 15:52:57.820 [INFO][4023] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.65/26] IPv6=[] ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" HandleID="k8s-pod-network.bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Workload="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" Nov 5 15:52:57.892142 containerd[1640]: 2025-11-05 15:52:57.827 [INFO][4010] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0", GenerateName:"whisker-6847488456-", Namespace:"calico-system", SelfLink:"", UID:"6c65c2f5-d1b1-431c-b576-9bb521d35cd6", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6847488456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"whisker-6847488456-ddlsx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"calic14bedfa6ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:52:57.892142 containerd[1640]: 2025-11-05 15:52:57.827 [INFO][4010] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.65/32] ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" Nov 5 15:52:57.892865 containerd[1640]: 2025-11-05 15:52:57.827 [INFO][4010] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic14bedfa6ad ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" Nov 5 15:52:57.892865 containerd[1640]: 2025-11-05 15:52:57.859 [INFO][4010] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" Nov 5 15:52:57.894021 containerd[1640]: 2025-11-05 15:52:57.861 [INFO][4010] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0", GenerateName:"whisker-6847488456-", Namespace:"calico-system", SelfLink:"", 
UID:"6c65c2f5-d1b1-431c-b576-9bb521d35cd6", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6847488456", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c", Pod:"whisker-6847488456-ddlsx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic14bedfa6ad", MAC:"da:23:24:14:16:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:52:57.895006 containerd[1640]: 2025-11-05 15:52:57.883 [INFO][4010] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" Namespace="calico-system" Pod="whisker-6847488456-ddlsx" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-whisker--6847488456--ddlsx-eth0" Nov 5 15:52:58.096187 containerd[1640]: time="2025-11-05T15:52:58.096037341Z" level=info msg="connecting to shim bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c" address="unix:///run/containerd/s/25de86594fbba4f6b3abbf589badb4ffeebd0781a8f94a4601abd61d32f4fbeb" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:58.122289 systemd[1]: Started 
cri-containerd-bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c.scope - libcontainer container bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c. Nov 5 15:52:58.133148 containerd[1640]: time="2025-11-05T15:52:58.133117274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\" id:\"e007b89bf8dcae04e54a0a1f3bcefa63b217ef5bc01369365d15fecd04758b4e\" pid:4041 exit_status:1 exited_at:{seconds:1762357978 nanos:130442774}" Nov 5 15:52:58.189851 containerd[1640]: time="2025-11-05T15:52:58.189791582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6847488456-ddlsx,Uid:6c65c2f5-d1b1-431c-b576-9bb521d35cd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc10b615bf406a59dbddc2aba7bcacc55db109cb6db28b3e611767202ff0a19c\"" Nov 5 15:52:58.195157 containerd[1640]: time="2025-11-05T15:52:58.195134204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:52:58.566082 systemd-networkd[1509]: vxlan.calico: Link UP Nov 5 15:52:58.566101 systemd-networkd[1509]: vxlan.calico: Gained carrier Nov 5 15:52:58.747369 containerd[1640]: time="2025-11-05T15:52:58.747325106Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:58.748833 containerd[1640]: time="2025-11-05T15:52:58.748790459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:52:58.749025 containerd[1640]: time="2025-11-05T15:52:58.748670816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:52:58.749528 kubelet[2881]: E1105 15:52:58.749332 2881 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:52:58.749528 kubelet[2881]: E1105 15:52:58.749395 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:52:58.772828 kubelet[2881]: E1105 15:52:58.772709 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fbf7bd05464430797d5c3831755c3d7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type
:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:58.776755 containerd[1640]: time="2025-11-05T15:52:58.776627002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:52:59.277160 containerd[1640]: time="2025-11-05T15:52:59.277031968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:52:59.278504 containerd[1640]: time="2025-11-05T15:52:59.278442330Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:52:59.278592 containerd[1640]: time="2025-11-05T15:52:59.278468721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:52:59.279101 kubelet[2881]: E1105 15:52:59.278805 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 
15:52:59.279101 kubelet[2881]: E1105 15:52:59.278853 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:52:59.279958 kubelet[2881]: E1105 15:52:59.278987 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*fal
se,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:52:59.286881 kubelet[2881]: E1105 15:52:59.286744 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:52:59.356422 systemd-networkd[1509]: calic14bedfa6ad: Gained IPv6LL Nov 5 15:52:59.504272 containerd[1640]: time="2025-11-05T15:52:59.504138054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lp6gv,Uid:882e95b9-8bc5-46aa-95f2-d288e79d54ed,Namespace:calico-system,Attempt:0,}" Nov 5 15:52:59.612900 systemd-networkd[1509]: vxlan.calico: Gained IPv6LL 
Nov 5 15:52:59.665548 systemd-networkd[1509]: cali4a0c210472a: Link UP Nov 5 15:52:59.666574 systemd-networkd[1509]: cali4a0c210472a: Gained carrier Nov 5 15:52:59.684823 containerd[1640]: 2025-11-05 15:52:59.550 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0 goldmane-666569f655- calico-system 882e95b9-8bc5-46aa-95f2-d288e79d54ed 817 0 2025-11-05 15:52:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4487-0-1-1-e8a5680daa goldmane-666569f655-lp6gv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4a0c210472a [] [] }} ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-" Nov 5 15:52:59.684823 containerd[1640]: 2025-11-05 15:52:59.551 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" Nov 5 15:52:59.684823 containerd[1640]: 2025-11-05 15:52:59.602 [INFO][4304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" HandleID="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Workload="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.602 [INFO][4304] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" HandleID="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Workload="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5700), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-1-1-e8a5680daa", "pod":"goldmane-666569f655-lp6gv", "timestamp":"2025-11-05 15:52:59.602338342 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.602 [INFO][4304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.602 [INFO][4304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.602 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa' Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.613 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.621 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.630 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.636 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685063 containerd[1640]: 2025-11-05 15:52:59.642 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685665 containerd[1640]: 2025-11-05 15:52:59.642 [INFO][4304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685665 containerd[1640]: 2025-11-05 15:52:59.645 [INFO][4304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3 Nov 5 15:52:59.685665 containerd[1640]: 2025-11-05 15:52:59.652 [INFO][4304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685665 containerd[1640]: 2025-11-05 15:52:59.658 [INFO][4304] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.51.66/26] block=192.168.51.64/26 handle="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685665 containerd[1640]: 2025-11-05 15:52:59.658 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.66/26] handle="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:52:59.685665 containerd[1640]: 2025-11-05 15:52:59.658 [INFO][4304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:52:59.685665 containerd[1640]: 2025-11-05 15:52:59.658 [INFO][4304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.66/26] IPv6=[] ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" HandleID="k8s-pod-network.7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Workload="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" Nov 5 15:52:59.685812 containerd[1640]: 2025-11-05 15:52:59.661 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"882e95b9-8bc5-46aa-95f2-d288e79d54ed", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"goldmane-666569f655-lp6gv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a0c210472a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:52:59.685874 containerd[1640]: 2025-11-05 15:52:59.661 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.66/32] ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" Nov 5 15:52:59.685874 containerd[1640]: 2025-11-05 15:52:59.661 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a0c210472a ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" Nov 5 15:52:59.685874 containerd[1640]: 2025-11-05 15:52:59.666 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" Nov 5 15:52:59.685944 containerd[1640]: 2025-11-05 15:52:59.667 [INFO][4292] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"882e95b9-8bc5-46aa-95f2-d288e79d54ed", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3", Pod:"goldmane-666569f655-lp6gv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a0c210472a", MAC:"1a:e4:71:2b:e2:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:52:59.686380 containerd[1640]: 2025-11-05 15:52:59.679 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" Namespace="calico-system" Pod="goldmane-666569f655-lp6gv" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-goldmane--666569f655--lp6gv-eth0" Nov 5 15:52:59.709801 containerd[1640]: time="2025-11-05T15:52:59.709763658Z" level=info msg="connecting to shim 7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3" address="unix:///run/containerd/s/725085396d8de13b702aa6e2d2cdd01379a952c209eacc93100116f8bda1702e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:52:59.726280 systemd[1]: Started cri-containerd-7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3.scope - libcontainer container 7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3. Nov 5 15:52:59.763742 containerd[1640]: time="2025-11-05T15:52:59.763695810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-lp6gv,Uid:882e95b9-8bc5-46aa-95f2-d288e79d54ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"7abe427996cbea767fbb8b0a3028e46bfe46529eef47ae3e6c38b2dae59230e3\"" Nov 5 15:52:59.769057 kubelet[2881]: E1105 15:52:59.768995 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:52:59.769359 containerd[1640]: time="2025-11-05T15:52:59.768964798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:53:00.221377 containerd[1640]: time="2025-11-05T15:53:00.221296018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:00.223072 containerd[1640]: time="2025-11-05T15:53:00.222944334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:53:00.223072 containerd[1640]: time="2025-11-05T15:53:00.223026666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:00.223813 kubelet[2881]: E1105 15:53:00.223339 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:00.223813 kubelet[2881]: E1105 15:53:00.223402 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:00.224308 kubelet[2881]: E1105 15:53:00.223590 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5ql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lp6gv_calico-system(882e95b9-8bc5-46aa-95f2-d288e79d54ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:00.225825 kubelet[2881]: E1105 15:53:00.225730 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:53:00.502025 containerd[1640]: time="2025-11-05T15:53:00.501883981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmgb9,Uid:b030d0a4-2581-4144-a827-7d0dc3133cf3,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:00.688258 
systemd-networkd[1509]: cali681fe6d9795: Link UP Nov 5 15:53:00.689697 systemd-networkd[1509]: cali681fe6d9795: Gained carrier Nov 5 15:53:00.711540 containerd[1640]: 2025-11-05 15:53:00.585 [INFO][4371] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0 csi-node-driver- calico-system b030d0a4-2581-4144-a827-7d0dc3133cf3 703 0 2025-11-05 15:52:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4487-0-1-1-e8a5680daa csi-node-driver-zmgb9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali681fe6d9795 [] [] }} ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-" Nov 5 15:53:00.711540 containerd[1640]: 2025-11-05 15:53:00.586 [INFO][4371] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" Nov 5 15:53:00.711540 containerd[1640]: 2025-11-05 15:53:00.625 [INFO][4384] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" HandleID="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Workload="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.626 [INFO][4384] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" HandleID="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Workload="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-1-1-e8a5680daa", "pod":"csi-node-driver-zmgb9", "timestamp":"2025-11-05 15:53:00.625913531 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.626 [INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.626 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.626 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa' Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.640 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.647 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.655 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.657 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.712779 containerd[1640]: 2025-11-05 15:53:00.660 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.714369 containerd[1640]: 2025-11-05 15:53:00.660 [INFO][4384] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.714369 containerd[1640]: 2025-11-05 15:53:00.662 [INFO][4384] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855 Nov 5 15:53:00.714369 containerd[1640]: 2025-11-05 15:53:00.668 [INFO][4384] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.714369 containerd[1640]: 2025-11-05 15:53:00.677 [INFO][4384] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.51.67/26] block=192.168.51.64/26 handle="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.714369 containerd[1640]: 2025-11-05 15:53:00.677 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.67/26] handle="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:00.714369 containerd[1640]: 2025-11-05 15:53:00.677 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:53:00.714369 containerd[1640]: 2025-11-05 15:53:00.677 [INFO][4384] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.67/26] IPv6=[] ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" HandleID="k8s-pod-network.6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Workload="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" Nov 5 15:53:00.714807 containerd[1640]: 2025-11-05 15:53:00.682 [INFO][4371] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b030d0a4-2581-4144-a827-7d0dc3133cf3", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"csi-node-driver-zmgb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali681fe6d9795", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:00.714918 containerd[1640]: 2025-11-05 15:53:00.683 [INFO][4371] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.67/32] ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" Nov 5 15:53:00.714918 containerd[1640]: 2025-11-05 15:53:00.683 [INFO][4371] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali681fe6d9795 ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" Nov 5 15:53:00.714918 containerd[1640]: 2025-11-05 15:53:00.691 [INFO][4371] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" Nov 5 15:53:00.715214 containerd[1640]: 2025-11-05 
15:53:00.691 [INFO][4371] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b030d0a4-2581-4144-a827-7d0dc3133cf3", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855", Pod:"csi-node-driver-zmgb9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali681fe6d9795", MAC:"56:e8:92:e9:f7:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:00.715734 containerd[1640]: 2025-11-05 15:53:00.704 
[INFO][4371] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" Namespace="calico-system" Pod="csi-node-driver-zmgb9" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-csi--node--driver--zmgb9-eth0" Nov 5 15:53:00.757316 containerd[1640]: time="2025-11-05T15:53:00.757052730Z" level=info msg="connecting to shim 6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855" address="unix:///run/containerd/s/03e7326c6118e2ab4b3a0edd0318e8bca0ac5385d3bbd77e40c279583ba80e40" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:00.775583 kubelet[2881]: E1105 15:53:00.775465 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:53:00.817701 systemd[1]: Started cri-containerd-6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855.scope - libcontainer container 6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855. 
Nov 5 15:53:00.849893 containerd[1640]: time="2025-11-05T15:53:00.849817973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zmgb9,Uid:b030d0a4-2581-4144-a827-7d0dc3133cf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6422991ab77f9897465404b4234a1a18777f51e20e1bd2bd26b942ed43ce4855\"" Nov 5 15:53:00.852321 containerd[1640]: time="2025-11-05T15:53:00.852008102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:53:01.276430 systemd-networkd[1509]: cali4a0c210472a: Gained IPv6LL Nov 5 15:53:01.287942 containerd[1640]: time="2025-11-05T15:53:01.287884579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:01.291062 containerd[1640]: time="2025-11-05T15:53:01.290924996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:53:01.291062 containerd[1640]: time="2025-11-05T15:53:01.291031328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:53:01.291846 kubelet[2881]: E1105 15:53:01.291720 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:01.292207 kubelet[2881]: E1105 15:53:01.291822 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:01.292936 kubelet[2881]: E1105 15:53:01.292628 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:01.299952 containerd[1640]: time="2025-11-05T15:53:01.299533366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:53:01.502506 containerd[1640]: time="2025-11-05T15:53:01.502281683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-lq899,Uid:ed0296fa-6511-43dd-8149-4b0a2a3f1b39,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:53:01.502988 containerd[1640]: time="2025-11-05T15:53:01.502282443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-2db4p,Uid:cac08146-1ce4-4e81-8229-9518d20f8fa6,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:53:01.665134 systemd-networkd[1509]: cali692cf657d55: Link UP Nov 5 15:53:01.665914 systemd-networkd[1509]: cali692cf657d55: Gained carrier Nov 5 15:53:01.678050 containerd[1640]: 2025-11-05 15:53:01.587 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0 calico-apiserver-6c6845c454- calico-apiserver ed0296fa-6511-43dd-8149-4b0a2a3f1b39 812 0 2025-11-05 15:52:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6845c454 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4487-0-1-1-e8a5680daa calico-apiserver-6c6845c454-lq899 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali692cf657d55 [] [] }} 
ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-" Nov 5 15:53:01.678050 containerd[1640]: 2025-11-05 15:53:01.588 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" Nov 5 15:53:01.678050 containerd[1640]: 2025-11-05 15:53:01.632 [INFO][4476] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" HandleID="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.632 [INFO][4476] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" HandleID="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487-0-1-1-e8a5680daa", "pod":"calico-apiserver-6c6845c454-lq899", "timestamp":"2025-11-05 15:53:01.632740578 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.632 [INFO][4476] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.632 [INFO][4476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.632 [INFO][4476] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa' Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.638 [INFO][4476] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.641 [INFO][4476] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.645 [INFO][4476] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.646 [INFO][4476] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678309 containerd[1640]: 2025-11-05 15:53:01.648 [INFO][4476] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678821 containerd[1640]: 2025-11-05 15:53:01.648 [INFO][4476] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678821 containerd[1640]: 2025-11-05 15:53:01.649 [INFO][4476] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3 Nov 5 15:53:01.678821 containerd[1640]: 2025-11-05 15:53:01.653 [INFO][4476] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 
handle="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678821 containerd[1640]: 2025-11-05 15:53:01.657 [INFO][4476] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.68/26] block=192.168.51.64/26 handle="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678821 containerd[1640]: 2025-11-05 15:53:01.657 [INFO][4476] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.68/26] handle="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.678821 containerd[1640]: 2025-11-05 15:53:01.657 [INFO][4476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:53:01.678821 containerd[1640]: 2025-11-05 15:53:01.657 [INFO][4476] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.68/26] IPv6=[] ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" HandleID="k8s-pod-network.ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" Nov 5 15:53:01.678920 containerd[1640]: 2025-11-05 15:53:01.661 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0", GenerateName:"calico-apiserver-6c6845c454-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed0296fa-6511-43dd-8149-4b0a2a3f1b39", ResourceVersion:"812", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6845c454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"calico-apiserver-6c6845c454-lq899", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali692cf657d55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:01.678960 containerd[1640]: 2025-11-05 15:53:01.661 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.68/32] ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" Nov 5 15:53:01.678960 containerd[1640]: 2025-11-05 15:53:01.661 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali692cf657d55 ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" Nov 5 15:53:01.678960 containerd[1640]: 2025-11-05 15:53:01.664 
[INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" Nov 5 15:53:01.679003 containerd[1640]: 2025-11-05 15:53:01.666 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0", GenerateName:"calico-apiserver-6c6845c454-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed0296fa-6511-43dd-8149-4b0a2a3f1b39", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6845c454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3", Pod:"calico-apiserver-6c6845c454-lq899", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.51.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali692cf657d55", MAC:"06:b3:0d:d0:84:95", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:01.679036 containerd[1640]: 2025-11-05 15:53:01.675 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-lq899" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--lq899-eth0" Nov 5 15:53:01.709728 containerd[1640]: time="2025-11-05T15:53:01.709059589Z" level=info msg="connecting to shim ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3" address="unix:///run/containerd/s/754d0ec35f27b825a5cf1ef5e79419bc90470f8dd47f8c143ac8073ac9a6fc36" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:01.728450 containerd[1640]: time="2025-11-05T15:53:01.728413626Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:01.729577 containerd[1640]: time="2025-11-05T15:53:01.729541991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:53:01.729657 containerd[1640]: time="2025-11-05T15:53:01.729627553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:53:01.729804 kubelet[2881]: E1105 15:53:01.729773 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:01.729901 kubelet[2881]: E1105 15:53:01.729887 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:01.730456 systemd[1]: Started cri-containerd-ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3.scope - libcontainer container ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3. Nov 5 15:53:01.735263 kubelet[2881]: E1105 15:53:01.734596 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:01.736736 kubelet[2881]: E1105 15:53:01.736667 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:53:01.776663 kubelet[2881]: E1105 15:53:01.776593 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:53:01.777067 kubelet[2881]: E1105 15:53:01.776756 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:53:01.800175 systemd-networkd[1509]: cali8365d0fb945: Link UP Nov 5 15:53:01.800843 systemd-networkd[1509]: cali8365d0fb945: Gained carrier Nov 5 15:53:01.815975 containerd[1640]: time="2025-11-05T15:53:01.815930475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-lq899,Uid:ed0296fa-6511-43dd-8149-4b0a2a3f1b39,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ce093268eca374a6a41ce13d8710d8508fb921051aea52a58091ef6b215081a3\"" Nov 5 15:53:01.818347 containerd[1640]: time="2025-11-05T15:53:01.818291877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:01.820263 containerd[1640]: 2025-11-05 15:53:01.564 [INFO][4447] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0 calico-apiserver-6c6845c454- calico-apiserver cac08146-1ce4-4e81-8229-9518d20f8fa6 818 0 2025-11-05 15:52:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c6845c454 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4487-0-1-1-e8a5680daa calico-apiserver-6c6845c454-2db4p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8365d0fb945 [] [] }} ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-" Nov 5 15:53:01.820263 containerd[1640]: 2025-11-05 15:53:01.581 [INFO][4447] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" Nov 5 15:53:01.820263 containerd[1640]: 2025-11-05 15:53:01.652 [INFO][4471] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" HandleID="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.653 [INFO][4471] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" HandleID="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f7e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4487-0-1-1-e8a5680daa", "pod":"calico-apiserver-6c6845c454-2db4p", "timestamp":"2025-11-05 15:53:01.65279857 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.653 [INFO][4471] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.658 [INFO][4471] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.658 [INFO][4471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa' Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.747 [INFO][4471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.756 [INFO][4471] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.761 [INFO][4471] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.763 [INFO][4471] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.820574 containerd[1640]: 2025-11-05 15:53:01.765 [INFO][4471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.821021 containerd[1640]: 2025-11-05 15:53:01.765 [INFO][4471] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.821021 containerd[1640]: 2025-11-05 15:53:01.766 [INFO][4471] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477 Nov 5 15:53:01.821021 containerd[1640]: 2025-11-05 15:53:01.770 [INFO][4471] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.821021 containerd[1640]: 2025-11-05 15:53:01.781 [INFO][4471] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.69/26] block=192.168.51.64/26 handle="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.821021 containerd[1640]: 2025-11-05 15:53:01.782 [INFO][4471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.69/26] handle="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:01.821021 containerd[1640]: 2025-11-05 15:53:01.782 [INFO][4471] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:01.821021 containerd[1640]: 2025-11-05 15:53:01.783 [INFO][4471] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.69/26] IPv6=[] ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" HandleID="k8s-pod-network.01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" Nov 5 15:53:01.821117 containerd[1640]: 2025-11-05 15:53:01.792 [INFO][4447] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0", GenerateName:"calico-apiserver-6c6845c454-", Namespace:"calico-apiserver", SelfLink:"", UID:"cac08146-1ce4-4e81-8229-9518d20f8fa6", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6845c454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"calico-apiserver-6c6845c454-2db4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.51.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8365d0fb945", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:01.821156 containerd[1640]: 2025-11-05 15:53:01.792 [INFO][4447] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.69/32] ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" Nov 5 15:53:01.821156 containerd[1640]: 2025-11-05 15:53:01.793 [INFO][4447] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8365d0fb945 ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" Nov 5 15:53:01.821156 containerd[1640]: 2025-11-05 15:53:01.802 [INFO][4447] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" Nov 5 15:53:01.821245 containerd[1640]: 2025-11-05 15:53:01.803 [INFO][4447] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0", GenerateName:"calico-apiserver-6c6845c454-", Namespace:"calico-apiserver", SelfLink:"", UID:"cac08146-1ce4-4e81-8229-9518d20f8fa6", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c6845c454", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477", Pod:"calico-apiserver-6c6845c454-2db4p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8365d0fb945", MAC:"ee:19:d9:c9:b2:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:01.821281 containerd[1640]: 2025-11-05 15:53:01.814 [INFO][4447] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" Namespace="calico-apiserver" Pod="calico-apiserver-6c6845c454-2db4p" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--apiserver--6c6845c454--2db4p-eth0" Nov 5 15:53:01.844343 containerd[1640]: time="2025-11-05T15:53:01.844269758Z" 
level=info msg="connecting to shim 01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477" address="unix:///run/containerd/s/f046c605d9d926c8dae99ea231336732c6adb58d054801fd1e085e01eb850c22" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:01.871291 systemd[1]: Started cri-containerd-01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477.scope - libcontainer container 01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477. Nov 5 15:53:01.907943 containerd[1640]: time="2025-11-05T15:53:01.907873780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6845c454-2db4p,Uid:cac08146-1ce4-4e81-8229-9518d20f8fa6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"01da649e3c7dd3796a1081cd578d2688ff6461315ee2705fb498f1bf2881c477\"" Nov 5 15:53:01.980422 systemd-networkd[1509]: cali681fe6d9795: Gained IPv6LL Nov 5 15:53:02.258767 containerd[1640]: time="2025-11-05T15:53:02.258540205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:02.260276 containerd[1640]: time="2025-11-05T15:53:02.260128030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:02.260276 containerd[1640]: time="2025-11-05T15:53:02.260229472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:02.260571 kubelet[2881]: E1105 15:53:02.260515 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:02.260994 kubelet[2881]: E1105 15:53:02.260586 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:02.261070 kubelet[2881]: E1105 15:53:02.260980 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8m87n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-lq899_calico-apiserver(ed0296fa-6511-43dd-8149-4b0a2a3f1b39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:02.262148 containerd[1640]: time="2025-11-05T15:53:02.261926908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:02.262772 kubelet[2881]: E1105 15:53:02.262316 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:53:02.501069 containerd[1640]: 
time="2025-11-05T15:53:02.501033578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtx6k,Uid:a36eb2db-7320-42f2-91ec-0f462f8e66f5,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:02.501604 containerd[1640]: time="2025-11-05T15:53:02.501301074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c8bcf4d4-sxk8m,Uid:53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6,Namespace:calico-system,Attempt:0,}" Nov 5 15:53:02.501677 containerd[1640]: time="2025-11-05T15:53:02.501353875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpxj6,Uid:b0705c42-1f08-4eb9-bf1f-7320280b9c42,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:02.664320 systemd-networkd[1509]: cali9b4fa7e8702: Link UP Nov 5 15:53:02.666148 systemd-networkd[1509]: cali9b4fa7e8702: Gained carrier Nov 5 15:53:02.687655 containerd[1640]: 2025-11-05 15:53:02.581 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0 coredns-674b8bbfcf- kube-system b0705c42-1f08-4eb9-bf1f-7320280b9c42 809 0 2025-11-05 15:52:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487-0-1-1-e8a5680daa coredns-674b8bbfcf-vpxj6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b4fa7e8702 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-" Nov 5 15:53:02.687655 containerd[1640]: 2025-11-05 15:53:02.581 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" Nov 5 15:53:02.687655 containerd[1640]: 2025-11-05 15:53:02.614 [INFO][4631] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" HandleID="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Workload="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.615 [INFO][4631] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" HandleID="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Workload="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5180), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487-0-1-1-e8a5680daa", "pod":"coredns-674b8bbfcf-vpxj6", "timestamp":"2025-11-05 15:53:02.614433184 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.615 [INFO][4631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.615 [INFO][4631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.615 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa' Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.632 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.636 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.642 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.643 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688118 containerd[1640]: 2025-11-05 15:53:02.646 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688295 containerd[1640]: 2025-11-05 15:53:02.646 [INFO][4631] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688295 containerd[1640]: 2025-11-05 15:53:02.647 [INFO][4631] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126 Nov 5 15:53:02.688295 containerd[1640]: 2025-11-05 15:53:02.651 [INFO][4631] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688295 containerd[1640]: 2025-11-05 15:53:02.657 [INFO][4631] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.51.70/26] block=192.168.51.64/26 handle="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688295 containerd[1640]: 2025-11-05 15:53:02.657 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.70/26] handle="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.688295 containerd[1640]: 2025-11-05 15:53:02.657 [INFO][4631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:53:02.688295 containerd[1640]: 2025-11-05 15:53:02.657 [INFO][4631] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.70/26] IPv6=[] ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" HandleID="k8s-pod-network.97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Workload="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" Nov 5 15:53:02.689431 containerd[1640]: 2025-11-05 15:53:02.659 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b0705c42-1f08-4eb9-bf1f-7320280b9c42", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"coredns-674b8bbfcf-vpxj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b4fa7e8702", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:02.689431 containerd[1640]: 2025-11-05 15:53:02.660 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.70/32] ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" Nov 5 15:53:02.689431 containerd[1640]: 2025-11-05 15:53:02.660 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b4fa7e8702 ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" Nov 5 15:53:02.689431 containerd[1640]: 2025-11-05 15:53:02.668 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" Nov 5 15:53:02.689431 containerd[1640]: 2025-11-05 15:53:02.668 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b0705c42-1f08-4eb9-bf1f-7320280b9c42", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126", Pod:"coredns-674b8bbfcf-vpxj6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b4fa7e8702", MAC:"b2:77:ea:1b:d8:f9", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:02.689431 containerd[1640]: 2025-11-05 15:53:02.681 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" Namespace="kube-system" Pod="coredns-674b8bbfcf-vpxj6" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--vpxj6-eth0" Nov 5 15:53:02.713454 containerd[1640]: time="2025-11-05T15:53:02.713257281Z" level=info msg="connecting to shim 97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126" address="unix:///run/containerd/s/74082853dbddceaad3429f06c97025bda120e94b161bb25fc5644bdae62bb1d5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:02.720068 containerd[1640]: time="2025-11-05T15:53:02.720048959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:02.721523 containerd[1640]: time="2025-11-05T15:53:02.721493501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:02.721678 containerd[1640]: time="2025-11-05T15:53:02.721602514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:02.722564 kubelet[2881]: E1105 
15:53:02.721875 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:02.722564 kubelet[2881]: E1105 15:53:02.721940 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:02.727593 kubelet[2881]: E1105 15:53:02.727535 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cw5dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-2db4p_calico-apiserver(cac08146-1ce4-4e81-8229-9518d20f8fa6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:02.728924 kubelet[2881]: E1105 15:53:02.728744 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:53:02.747325 systemd[1]: Started cri-containerd-97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126.scope - libcontainer container 97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126. Nov 5 15:53:02.781991 systemd-networkd[1509]: calia75376374ef: Link UP Nov 5 15:53:02.784223 systemd-networkd[1509]: calia75376374ef: Gained carrier Nov 5 15:53:02.797722 kubelet[2881]: E1105 15:53:02.797680 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:53:02.799469 kubelet[2881]: E1105 15:53:02.799398 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:53:02.801435 kubelet[2881]: E1105 15:53:02.800961 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.592 [INFO][4605] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0 calico-kube-controllers-78c8bcf4d4- calico-system 53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6 815 0 2025-11-05 15:52:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78c8bcf4d4 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4487-0-1-1-e8a5680daa calico-kube-controllers-78c8bcf4d4-sxk8m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia75376374ef [] [] }} ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.593 [INFO][4605] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.625 [INFO][4638] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" HandleID="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.625 [INFO][4638] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" HandleID="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf760), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4487-0-1-1-e8a5680daa", 
"pod":"calico-kube-controllers-78c8bcf4d4-sxk8m", "timestamp":"2025-11-05 15:53:02.625116327 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.625 [INFO][4638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.657 [INFO][4638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.657 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa' Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.734 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.740 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.746 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.751 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.754 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.754 [INFO][4638] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" 
host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.755 [INFO][4638] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6 Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.759 [INFO][4638] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.768 [INFO][4638] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.71/26] block=192.168.51.64/26 handle="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.768 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.71/26] handle="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.768 [INFO][4638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:02.821761 containerd[1640]: 2025-11-05 15:53:02.768 [INFO][4638] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.71/26] IPv6=[] ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" HandleID="k8s-pod-network.ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Workload="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" Nov 5 15:53:02.822472 containerd[1640]: 2025-11-05 15:53:02.773 [INFO][4605] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0", GenerateName:"calico-kube-controllers-78c8bcf4d4-", Namespace:"calico-system", SelfLink:"", UID:"53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c8bcf4d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"calico-kube-controllers-78c8bcf4d4-sxk8m", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia75376374ef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:02.822472 containerd[1640]: 2025-11-05 15:53:02.774 [INFO][4605] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.71/32] ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" Nov 5 15:53:02.822472 containerd[1640]: 2025-11-05 15:53:02.774 [INFO][4605] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia75376374ef ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" Nov 5 15:53:02.822472 containerd[1640]: 2025-11-05 15:53:02.788 [INFO][4605] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" Nov 5 15:53:02.822472 containerd[1640]: 2025-11-05 15:53:02.791 [INFO][4605] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" 
WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0", GenerateName:"calico-kube-controllers-78c8bcf4d4-", Namespace:"calico-system", SelfLink:"", UID:"53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78c8bcf4d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6", Pod:"calico-kube-controllers-78c8bcf4d4-sxk8m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia75376374ef", MAC:"02:91:42:3b:fa:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:02.822472 containerd[1640]: 2025-11-05 15:53:02.817 [INFO][4605] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" Namespace="calico-system" 
Pod="calico-kube-controllers-78c8bcf4d4-sxk8m" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-calico--kube--controllers--78c8bcf4d4--sxk8m-eth0" Nov 5 15:53:02.859267 containerd[1640]: time="2025-11-05T15:53:02.859154086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vpxj6,Uid:b0705c42-1f08-4eb9-bf1f-7320280b9c42,Namespace:kube-system,Attempt:0,} returns sandbox id \"97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126\"" Nov 5 15:53:02.868534 containerd[1640]: time="2025-11-05T15:53:02.868502220Z" level=info msg="connecting to shim ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6" address="unix:///run/containerd/s/58d136ad55b32037855701a406992d190bd01aba3d94dbcd22210929fc09803b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:02.900295 systemd[1]: Started cri-containerd-ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6.scope - libcontainer container ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6. Nov 5 15:53:02.914128 containerd[1640]: time="2025-11-05T15:53:02.914063595Z" level=info msg="CreateContainer within sandbox \"97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:53:02.920947 systemd-networkd[1509]: cali291e4dbda13: Link UP Nov 5 15:53:02.924186 systemd-networkd[1509]: cali291e4dbda13: Gained carrier Nov 5 15:53:02.935396 containerd[1640]: time="2025-11-05T15:53:02.933217653Z" level=info msg="Container 4304e3791c1f73fbcea946c667109355654c8f43633e88131e6750ac75b41eff: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:02.937090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285360985.mount: Deactivated successfully. 
Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.584 [INFO][4595] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0 coredns-674b8bbfcf- kube-system a36eb2db-7320-42f2-91ec-0f462f8e66f5 814 0 2025-11-05 15:52:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4487-0-1-1-e8a5680daa coredns-674b8bbfcf-gtx6k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali291e4dbda13 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.585 [INFO][4595] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.655 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" HandleID="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Workload="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.655 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" HandleID="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" 
Workload="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4487-0-1-1-e8a5680daa", "pod":"coredns-674b8bbfcf-gtx6k", "timestamp":"2025-11-05 15:53:02.655428609 +0000 UTC"}, Hostname:"ci-4487-0-1-1-e8a5680daa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.655 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.768 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.768 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4487-0-1-1-e8a5680daa' Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.838 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.856 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.880 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.884 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.890 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.51.64/26 host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 
containerd[1640]: 2025-11-05 15:53:02.890 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.51.64/26 handle="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.896 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.903 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.51.64/26 handle="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.912 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.51.72/26] block=192.168.51.64/26 handle="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.912 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.51.72/26] handle="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" host="ci-4487-0-1-1-e8a5680daa" Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.912 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:53:02.943685 containerd[1640]: 2025-11-05 15:53:02.912 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.51.72/26] IPv6=[] ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" HandleID="k8s-pod-network.9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Workload="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" Nov 5 15:53:02.944031 containerd[1640]: 2025-11-05 15:53:02.916 [INFO][4595] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a36eb2db-7320-42f2-91ec-0f462f8e66f5", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"", Pod:"coredns-674b8bbfcf-gtx6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali291e4dbda13", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:02.944031 containerd[1640]: 2025-11-05 15:53:02.916 [INFO][4595] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.72/32] ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" Nov 5 15:53:02.944031 containerd[1640]: 2025-11-05 15:53:02.916 [INFO][4595] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali291e4dbda13 ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" Nov 5 15:53:02.944031 containerd[1640]: 2025-11-05 15:53:02.925 [INFO][4595] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" Nov 5 15:53:02.944031 containerd[1640]: 2025-11-05 15:53:02.925 [INFO][4595] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" 
WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a36eb2db-7320-42f2-91ec-0f462f8e66f5", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 52, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4487-0-1-1-e8a5680daa", ContainerID:"9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe", Pod:"coredns-674b8bbfcf-gtx6k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali291e4dbda13", MAC:"f2:32:01:c3:d5:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:53:02.944031 
containerd[1640]: 2025-11-05 15:53:02.939 [INFO][4595] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" Namespace="kube-system" Pod="coredns-674b8bbfcf-gtx6k" WorkloadEndpoint="ci--4487--0--1--1--e8a5680daa-k8s-coredns--674b8bbfcf--gtx6k-eth0" Nov 5 15:53:02.947852 containerd[1640]: time="2025-11-05T15:53:02.947817142Z" level=info msg="CreateContainer within sandbox \"97dc513af0800f186e08d0f503ecd62b9f2d0b812792c47ff726303c039f0126\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4304e3791c1f73fbcea946c667109355654c8f43633e88131e6750ac75b41eff\"" Nov 5 15:53:02.949104 containerd[1640]: time="2025-11-05T15:53:02.949078510Z" level=info msg="StartContainer for \"4304e3791c1f73fbcea946c667109355654c8f43633e88131e6750ac75b41eff\"" Nov 5 15:53:02.951773 containerd[1640]: time="2025-11-05T15:53:02.951554264Z" level=info msg="connecting to shim 4304e3791c1f73fbcea946c667109355654c8f43633e88131e6750ac75b41eff" address="unix:///run/containerd/s/74082853dbddceaad3429f06c97025bda120e94b161bb25fc5644bdae62bb1d5" protocol=ttrpc version=3 Nov 5 15:53:02.978345 containerd[1640]: time="2025-11-05T15:53:02.978288087Z" level=info msg="connecting to shim 9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe" address="unix:///run/containerd/s/828f7b56fa6a7fbf93303cb4c92c8081288c9149653f74e5e93fc56e2faa72d0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:02.979293 systemd[1]: Started cri-containerd-4304e3791c1f73fbcea946c667109355654c8f43633e88131e6750ac75b41eff.scope - libcontainer container 4304e3791c1f73fbcea946c667109355654c8f43633e88131e6750ac75b41eff. Nov 5 15:53:03.004302 systemd-networkd[1509]: cali692cf657d55: Gained IPv6LL Nov 5 15:53:03.012371 systemd[1]: Started cri-containerd-9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe.scope - libcontainer container 9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe. 
Nov 5 15:53:03.032481 containerd[1640]: time="2025-11-05T15:53:03.032435363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78c8bcf4d4-sxk8m,Uid:53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebcb37bdcc35ce1aaca13ff424e9dbb02d2ca252f7ba43ed4daf9ad5ff5dacb6\"" Nov 5 15:53:03.036491 containerd[1640]: time="2025-11-05T15:53:03.036321317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:53:03.049777 containerd[1640]: time="2025-11-05T15:53:03.049753148Z" level=info msg="StartContainer for \"4304e3791c1f73fbcea946c667109355654c8f43633e88131e6750ac75b41eff\" returns successfully" Nov 5 15:53:03.075654 containerd[1640]: time="2025-11-05T15:53:03.075612907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gtx6k,Uid:a36eb2db-7320-42f2-91ec-0f462f8e66f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe\"" Nov 5 15:53:03.081823 containerd[1640]: time="2025-11-05T15:53:03.081751651Z" level=info msg="CreateContainer within sandbox \"9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:53:03.091366 containerd[1640]: time="2025-11-05T15:53:03.091339028Z" level=info msg="Container 19ca2166afc0c6a594d91803589dca20116fc51aab37f1eb292daf04b0236b10: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:03.098804 containerd[1640]: time="2025-11-05T15:53:03.098761148Z" level=info msg="CreateContainer within sandbox \"9eb245d5a335263d449ba9b9d0bfa24be07c42392a041fd700271ac38b597cfe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19ca2166afc0c6a594d91803589dca20116fc51aab37f1eb292daf04b0236b10\"" Nov 5 15:53:03.099460 containerd[1640]: time="2025-11-05T15:53:03.099364291Z" level=info msg="StartContainer for 
\"19ca2166afc0c6a594d91803589dca20116fc51aab37f1eb292daf04b0236b10\"" Nov 5 15:53:03.100302 containerd[1640]: time="2025-11-05T15:53:03.100270671Z" level=info msg="connecting to shim 19ca2166afc0c6a594d91803589dca20116fc51aab37f1eb292daf04b0236b10" address="unix:///run/containerd/s/828f7b56fa6a7fbf93303cb4c92c8081288c9149653f74e5e93fc56e2faa72d0" protocol=ttrpc version=3 Nov 5 15:53:03.117362 systemd[1]: Started cri-containerd-19ca2166afc0c6a594d91803589dca20116fc51aab37f1eb292daf04b0236b10.scope - libcontainer container 19ca2166afc0c6a594d91803589dca20116fc51aab37f1eb292daf04b0236b10. Nov 5 15:53:03.148808 containerd[1640]: time="2025-11-05T15:53:03.148412103Z" level=info msg="StartContainer for \"19ca2166afc0c6a594d91803589dca20116fc51aab37f1eb292daf04b0236b10\" returns successfully" Nov 5 15:53:03.388465 systemd-networkd[1509]: cali8365d0fb945: Gained IPv6LL Nov 5 15:53:03.486121 containerd[1640]: time="2025-11-05T15:53:03.485981898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:03.487651 containerd[1640]: time="2025-11-05T15:53:03.487576843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:53:03.487759 containerd[1640]: time="2025-11-05T15:53:03.487713266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:53:03.488034 kubelet[2881]: E1105 15:53:03.487944 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:03.488128 kubelet[2881]: E1105 15:53:03.488026 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:03.488561 kubelet[2881]: E1105 15:53:03.488367 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&
ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78c8bcf4d4-sxk8m_calico-system(53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:03.490314 kubelet[2881]: E1105 15:53:03.490121 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:53:03.812306 kubelet[2881]: E1105 15:53:03.810457 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:53:03.814454 kubelet[2881]: E1105 15:53:03.814354 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:53:03.818551 kubelet[2881]: E1105 15:53:03.818265 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:53:03.874188 kubelet[2881]: I1105 15:53:03.864209 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gtx6k" podStartSLOduration=38.857430306 podStartE2EDuration="38.857430306s" podCreationTimestamp="2025-11-05 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:03.829322017 +0000 UTC m=+44.479533778" watchObservedRunningTime="2025-11-05 15:53:03.857430306 +0000 UTC m=+44.507642027" Nov 5 15:53:03.874395 kubelet[2881]: I1105 15:53:03.874368 2881 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vpxj6" podStartSLOduration=38.873149965 podStartE2EDuration="38.873149965s" podCreationTimestamp="2025-11-05 15:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:03.85439165 +0000 UTC m=+44.504603331" watchObservedRunningTime="2025-11-05 15:53:03.873149965 +0000 UTC m=+44.523361646" Nov 5 15:53:04.156641 systemd-networkd[1509]: cali9b4fa7e8702: Gained IPv6LL Nov 5 15:53:04.157678 systemd-networkd[1509]: cali291e4dbda13: Gained IPv6LL Nov 5 15:53:04.220645 systemd-networkd[1509]: calia75376374ef: Gained IPv6LL Nov 5 15:53:04.825336 kubelet[2881]: E1105 15:53:04.825150 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:53:12.502885 containerd[1640]: time="2025-11-05T15:53:12.502799301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:53:12.938494 containerd[1640]: time="2025-11-05T15:53:12.938233127Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:12.940018 containerd[1640]: time="2025-11-05T15:53:12.939870328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:53:12.940359 containerd[1640]: time="2025-11-05T15:53:12.939994805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:12.940860 kubelet[2881]: E1105 15:53:12.940762 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:12.940860 kubelet[2881]: E1105 15:53:12.940851 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:12.941449 kubelet[2881]: E1105 15:53:12.941089 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5ql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lp6gv_calico-system(882e95b9-8bc5-46aa-95f2-d288e79d54ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:12.943307 kubelet[2881]: E1105 15:53:12.943233 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:53:13.509498 containerd[1640]: time="2025-11-05T15:53:13.509423954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:53:14.005703 containerd[1640]: time="2025-11-05T15:53:14.005629328Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Nov 5 15:53:14.007347 containerd[1640]: time="2025-11-05T15:53:14.007278672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:53:14.007559 containerd[1640]: time="2025-11-05T15:53:14.007394510Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:53:14.007826 kubelet[2881]: E1105 15:53:14.007745 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:53:14.007826 kubelet[2881]: E1105 15:53:14.007814 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:53:14.009096 kubelet[2881]: E1105 15:53:14.007977 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fbf7bd05464430797d5c3831755c3d7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:14.011200 containerd[1640]: time="2025-11-05T15:53:14.011044152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:53:14.456971 containerd[1640]: time="2025-11-05T15:53:14.456877894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:14.458680 containerd[1640]: time="2025-11-05T15:53:14.458603886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:53:14.458908 containerd[1640]: time="2025-11-05T15:53:14.458640495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:53:14.459049 kubelet[2881]: E1105 15:53:14.458988 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:53:14.459213 kubelet[2881]: E1105 15:53:14.459127 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:53:14.459390 kubelet[2881]: E1105 15:53:14.459296 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:14.461234 kubelet[2881]: E1105 15:53:14.461121 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:53:16.506088 containerd[1640]: time="2025-11-05T15:53:16.506042524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:53:16.944972 containerd[1640]: time="2025-11-05T15:53:16.944875331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:16.946800 containerd[1640]: time="2025-11-05T15:53:16.946652257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:53:16.946945 containerd[1640]: time="2025-11-05T15:53:16.946816904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:53:16.947197 kubelet[2881]: E1105 15:53:16.947091 2881 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:16.948140 kubelet[2881]: E1105 15:53:16.947195 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:16.948140 kubelet[2881]: E1105 15:53:16.947348 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lif
ecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:16.949748 containerd[1640]: time="2025-11-05T15:53:16.949663899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:53:17.402842 containerd[1640]: time="2025-11-05T15:53:17.402765987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:17.404469 containerd[1640]: time="2025-11-05T15:53:17.404344098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:53:17.404469 containerd[1640]: time="2025-11-05T15:53:17.404390887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:53:17.404748 kubelet[2881]: E1105 
15:53:17.404650 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:17.404748 kubelet[2881]: E1105 15:53:17.404707 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:17.404900 kubelet[2881]: E1105 15:53:17.404838 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:17.406138 kubelet[2881]: E1105 15:53:17.406088 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:53:17.505436 containerd[1640]: time="2025-11-05T15:53:17.505393502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:17.978554 containerd[1640]: time="2025-11-05T15:53:17.978428519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:17.980075 containerd[1640]: time="2025-11-05T15:53:17.980015671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:17.980264 containerd[1640]: time="2025-11-05T15:53:17.980136388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:17.980483 kubelet[2881]: E1105 15:53:17.980422 2881 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:17.981685 kubelet[2881]: E1105 15:53:17.980516 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:17.981685 kubelet[2881]: E1105 15:53:17.980762 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8m87n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-lq899_calico-apiserver(ed0296fa-6511-43dd-8149-4b0a2a3f1b39): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:17.982357 kubelet[2881]: E1105 15:53:17.982282 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:53:18.503214 containerd[1640]: time="2025-11-05T15:53:18.502876101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:18.972572 containerd[1640]: time="2025-11-05T15:53:18.972441253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:18.975451 containerd[1640]: time="2025-11-05T15:53:18.975350432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:18.975618 containerd[1640]: time="2025-11-05T15:53:18.975513060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:18.975937 kubelet[2881]: E1105 15:53:18.975852 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:18.976104 kubelet[2881]: E1105 15:53:18.975933 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:18.976807 kubelet[2881]: E1105 15:53:18.976224 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cw5dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-2db4p_calico-apiserver(cac08146-1ce4-4e81-8229-9518d20f8fa6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:18.977574 kubelet[2881]: E1105 15:53:18.977534 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:53:20.502136 containerd[1640]: time="2025-11-05T15:53:20.501954613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:53:20.809892 systemd[1]: Started 
sshd@8-46.62.132.115:22-103.203.57.11:42124.service - OpenSSH per-connection server daemon (103.203.57.11:42124). Nov 5 15:53:20.926494 containerd[1640]: time="2025-11-05T15:53:20.926373172Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:20.927615 containerd[1640]: time="2025-11-05T15:53:20.927558474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:53:20.927743 containerd[1640]: time="2025-11-05T15:53:20.927657482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:53:20.927873 kubelet[2881]: E1105 15:53:20.927805 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:20.928398 kubelet[2881]: E1105 15:53:20.927873 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:20.928398 kubelet[2881]: E1105 15:53:20.928020 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78c8bcf4d4-sxk8m_calico-system(53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:20.929373 kubelet[2881]: E1105 15:53:20.929302 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:53:20.962509 sshd[4920]: Connection closed by 103.203.57.11 port 42124 Nov 5 15:53:20.963802 systemd[1]: sshd@8-46.62.132.115:22-103.203.57.11:42124.service: Deactivated successfully. 
Nov 5 15:53:27.838942 containerd[1640]: time="2025-11-05T15:53:27.838862506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\" id:\"6ea7f0b8ec0e8aa1d39050ba03bcda14180208b64ed982376b93ed5e401aa9ed\" pid:4943 exited_at:{seconds:1762358007 nanos:838474630}" Nov 5 15:53:28.503920 kubelet[2881]: E1105 15:53:28.503845 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:53:28.506111 kubelet[2881]: E1105 15:53:28.505722 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" 
podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:53:28.506111 kubelet[2881]: E1105 15:53:28.505837 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:53:29.511606 kubelet[2881]: E1105 15:53:29.510839 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:53:29.511606 kubelet[2881]: E1105 15:53:29.511538 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:53:31.724017 systemd[1]: Started sshd@9-46.62.132.115:22-103.179.184.39:56544.service - OpenSSH per-connection server daemon (103.179.184.39:56544). Nov 5 15:53:33.371669 sshd[4956]: Connection closed by authenticating user root 103.179.184.39 port 56544 [preauth] Nov 5 15:53:33.374673 systemd[1]: sshd@9-46.62.132.115:22-103.179.184.39:56544.service: Deactivated successfully. Nov 5 15:53:34.502016 kubelet[2881]: E1105 15:53:34.501974 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:53:39.504851 containerd[1640]: time="2025-11-05T15:53:39.504646919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:53:39.974533 containerd[1640]: time="2025-11-05T15:53:39.974437575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:39.976368 containerd[1640]: time="2025-11-05T15:53:39.976286281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:53:39.976709 containerd[1640]: time="2025-11-05T15:53:39.976330681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:53:39.976819 kubelet[2881]: E1105 15:53:39.976639 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:39.976819 kubelet[2881]: E1105 15:53:39.976698 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:53:39.977997 kubelet[2881]: E1105 15:53:39.976885 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:39.979567 containerd[1640]: time="2025-11-05T15:53:39.979535834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:53:40.423600 containerd[1640]: time="2025-11-05T15:53:40.423512970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:40.425043 containerd[1640]: time="2025-11-05T15:53:40.424972868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:53:40.425158 containerd[1640]: time="2025-11-05T15:53:40.425082218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:53:40.425505 kubelet[2881]: E1105 15:53:40.425450 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:40.425603 kubelet[2881]: E1105 15:53:40.425525 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:53:40.425748 kubelet[2881]: E1105 
15:53:40.425694 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:40.427368 kubelet[2881]: E1105 15:53:40.427304 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:53:42.501856 containerd[1640]: time="2025-11-05T15:53:42.501610449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:53:42.955902 containerd[1640]: time="2025-11-05T15:53:42.955774460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:42.957284 containerd[1640]: time="2025-11-05T15:53:42.957234419Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:53:42.957904 containerd[1640]: 
time="2025-11-05T15:53:42.957269569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:53:42.957939 kubelet[2881]: E1105 15:53:42.957491 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:53:42.957939 kubelet[2881]: E1105 15:53:42.957533 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:53:42.959517 kubelet[2881]: E1105 15:53:42.957661 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fbf7bd05464430797d5c3831755c3d7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:42.962384 containerd[1640]: time="2025-11-05T15:53:42.962364714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:53:43.391262 containerd[1640]: time="2025-11-05T15:53:43.391198404Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:43.392632 containerd[1640]: time="2025-11-05T15:53:43.392573084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:53:43.392751 containerd[1640]: time="2025-11-05T15:53:43.392655734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:53:43.393021 kubelet[2881]: E1105 15:53:43.392959 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:53:43.393021 kubelet[2881]: E1105 15:53:43.393018 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:53:43.393291 kubelet[2881]: E1105 15:53:43.393152 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:43.394526 kubelet[2881]: E1105 15:53:43.394437 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:53:43.503341 containerd[1640]: time="2025-11-05T15:53:43.502821359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:53:43.946028 containerd[1640]: time="2025-11-05T15:53:43.945927089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:43.947677 containerd[1640]: time="2025-11-05T15:53:43.947512228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:53:43.947677 containerd[1640]: time="2025-11-05T15:53:43.947646728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:43.947980 
kubelet[2881]: E1105 15:53:43.947874 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:43.947980 kubelet[2881]: E1105 15:53:43.947928 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:53:43.948577 kubelet[2881]: E1105 15:53:43.948076 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5ql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lp6gv_calico-system(882e95b9-8bc5-46aa-95f2-d288e79d54ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:43.949874 kubelet[2881]: E1105 15:53:43.949350 2881 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:53:44.504699 containerd[1640]: time="2025-11-05T15:53:44.504331638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:45.169206 containerd[1640]: time="2025-11-05T15:53:45.169122654Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:45.171663 containerd[1640]: time="2025-11-05T15:53:45.171572235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:45.171663 containerd[1640]: time="2025-11-05T15:53:45.171619455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:45.171965 kubelet[2881]: E1105 15:53:45.171894 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:45.172405 kubelet[2881]: E1105 15:53:45.171974 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:45.173101 containerd[1640]: time="2025-11-05T15:53:45.172740025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:53:45.174253 kubelet[2881]: E1105 15:53:45.173012 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8m87n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-lq899_calico-apiserver(ed0296fa-6511-43dd-8149-4b0a2a3f1b39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:45.174709 kubelet[2881]: E1105 15:53:45.174532 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:53:45.632097 containerd[1640]: time="2025-11-05T15:53:45.631895436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:45.633802 containerd[1640]: 
time="2025-11-05T15:53:45.633746245Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:53:45.633986 containerd[1640]: time="2025-11-05T15:53:45.633877345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:53:45.634236 kubelet[2881]: E1105 15:53:45.634147 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:45.634297 kubelet[2881]: E1105 15:53:45.634252 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:53:45.634529 kubelet[2881]: E1105 15:53:45.634469 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cw5dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-2db4p_calico-apiserver(cac08146-1ce4-4e81-8229-9518d20f8fa6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:45.637099 kubelet[2881]: E1105 15:53:45.637028 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:53:48.504710 containerd[1640]: time="2025-11-05T15:53:48.504635407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:53:48.955568 containerd[1640]: time="2025-11-05T15:53:48.955468195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:53:48.957590 containerd[1640]: time="2025-11-05T15:53:48.957403028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:53:48.957590 containerd[1640]: time="2025-11-05T15:53:48.957541948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:53:48.958199 kubelet[2881]: E1105 15:53:48.958066 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:48.958199 kubelet[2881]: E1105 15:53:48.958145 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:53:48.959580 kubelet[2881]: E1105 15:53:48.959001 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78c8bcf4d4-sxk8m_calico-system(53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:53:48.961509 kubelet[2881]: E1105 15:53:48.960717 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:53:50.505282 kubelet[2881]: E1105 15:53:50.505193 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:53:56.312228 systemd[1]: Started sshd@10-46.62.132.115:22-139.178.68.195:54082.service - OpenSSH per-connection server daemon (139.178.68.195:54082). 
Nov 5 15:53:56.502945 kubelet[2881]: E1105 15:53:56.502641 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:53:57.401556 sshd[4978]: Accepted publickey for core from 139.178.68.195 port 54082 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:53:57.406443 sshd-session[4978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:53:57.416597 systemd-logind[1606]: New session 8 of user core. Nov 5 15:53:57.422323 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 5 15:53:57.505775 kubelet[2881]: E1105 15:53:57.505721 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:53:57.506944 kubelet[2881]: E1105 15:53:57.506544 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:53:57.872929 containerd[1640]: time="2025-11-05T15:53:57.872891741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\" id:\"380153b81738a5f4fb560a34672faa152fad0e12b56bcad76ed1ef37bb36a2e4\" pid:4995 
exited_at:{seconds:1762358037 nanos:872446679}" Nov 5 15:53:58.721957 sshd[4981]: Connection closed by 139.178.68.195 port 54082 Nov 5 15:53:58.723020 sshd-session[4978]: pam_unix(sshd:session): session closed for user core Nov 5 15:53:58.730309 systemd-logind[1606]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:53:58.732151 systemd[1]: sshd@10-46.62.132.115:22-139.178.68.195:54082.service: Deactivated successfully. Nov 5 15:53:58.737484 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:53:58.743594 systemd-logind[1606]: Removed session 8. Nov 5 15:53:59.505016 kubelet[2881]: E1105 15:53:59.504929 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:54:03.926757 systemd[1]: Started sshd@11-46.62.132.115:22-139.178.68.195:45526.service - OpenSSH per-connection server daemon (139.178.68.195:45526). 
Nov 5 15:54:04.502761 kubelet[2881]: E1105 15:54:04.502695 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:54:05.060838 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 45526 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:54:05.062852 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:05.070835 systemd-logind[1606]: New session 9 of user core. Nov 5 15:54:05.079598 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 5 15:54:05.510619 kubelet[2881]: E1105 15:54:05.510482 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:54:06.020398 sshd[5029]: Connection closed by 139.178.68.195 port 45526 Nov 5 15:54:06.020635 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:06.024158 systemd[1]: sshd@11-46.62.132.115:22-139.178.68.195:45526.service: Deactivated successfully. Nov 5 15:54:06.025491 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:54:06.026320 systemd-logind[1606]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:54:06.027060 systemd-logind[1606]: Removed session 9. Nov 5 15:54:06.176310 systemd[1]: Started sshd@12-46.62.132.115:22-139.178.68.195:45528.service - OpenSSH per-connection server daemon (139.178.68.195:45528). 
Nov 5 15:54:07.194617 sshd[5042]: Accepted publickey for core from 139.178.68.195 port 45528 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:54:07.196499 sshd-session[5042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:07.205018 systemd-logind[1606]: New session 10 of user core. Nov 5 15:54:07.213558 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:54:07.992176 sshd[5045]: Connection closed by 139.178.68.195 port 45528 Nov 5 15:54:07.993263 sshd-session[5042]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:07.999261 systemd-logind[1606]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:54:07.999625 systemd[1]: sshd@12-46.62.132.115:22-139.178.68.195:45528.service: Deactivated successfully. Nov 5 15:54:08.001009 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:54:08.002283 systemd-logind[1606]: Removed session 10. Nov 5 15:54:08.169674 systemd[1]: Started sshd@13-46.62.132.115:22-139.178.68.195:45536.service - OpenSSH per-connection server daemon (139.178.68.195:45536). 
Nov 5 15:54:08.502920 kubelet[2881]: E1105 15:54:08.502868 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:54:09.198391 sshd[5056]: Accepted publickey for core from 139.178.68.195 port 45536 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:54:09.202251 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:09.210306 systemd-logind[1606]: New session 11 of user core. Nov 5 15:54:09.216472 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:54:09.504902 kubelet[2881]: E1105 15:54:09.504732 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:54:09.942659 sshd[5064]: Connection closed by 139.178.68.195 port 45536 Nov 5 15:54:09.944191 sshd-session[5056]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:09.952345 systemd[1]: sshd@13-46.62.132.115:22-139.178.68.195:45536.service: Deactivated successfully. 
Nov 5 15:54:09.956379 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:54:09.958389 systemd-logind[1606]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:54:09.961646 systemd-logind[1606]: Removed session 11. Nov 5 15:54:10.503473 kubelet[2881]: E1105 15:54:10.503275 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:54:14.502144 kubelet[2881]: E1105 15:54:14.502089 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:54:15.121872 systemd[1]: Started 
sshd@14-46.62.132.115:22-139.178.68.195:47682.service - OpenSSH per-connection server daemon (139.178.68.195:47682). Nov 5 15:54:16.161000 sshd[5077]: Accepted publickey for core from 139.178.68.195 port 47682 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:54:16.164285 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:16.173469 systemd-logind[1606]: New session 12 of user core. Nov 5 15:54:16.181589 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:54:16.505311 kubelet[2881]: E1105 15:54:16.504594 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:54:16.505311 kubelet[2881]: E1105 15:54:16.504870 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:54:16.930724 sshd[5080]: Connection closed by 139.178.68.195 port 47682 Nov 5 15:54:16.932336 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:16.936014 systemd[1]: sshd@14-46.62.132.115:22-139.178.68.195:47682.service: Deactivated successfully. Nov 5 15:54:16.936221 systemd-logind[1606]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:54:16.937662 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:54:16.938849 systemd-logind[1606]: Removed session 12. Nov 5 15:54:20.504366 kubelet[2881]: E1105 15:54:20.504255 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:54:22.142822 systemd[1]: Started sshd@15-46.62.132.115:22-139.178.68.195:47694.service - OpenSSH per-connection server daemon (139.178.68.195:47694). Nov 5 15:54:23.270204 sshd[5100]: Accepted publickey for core from 139.178.68.195 port 47694 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:54:23.272721 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:23.280391 systemd-logind[1606]: New session 13 of user core. 
Nov 5 15:54:23.286323 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:54:23.505531 containerd[1640]: time="2025-11-05T15:54:23.505364472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:54:24.016100 containerd[1640]: time="2025-11-05T15:54:24.016031907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:24.017329 containerd[1640]: time="2025-11-05T15:54:24.017279539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:54:24.017414 containerd[1640]: time="2025-11-05T15:54:24.017388940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:54:24.018625 kubelet[2881]: E1105 15:54:24.018556 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:54:24.018855 kubelet[2881]: E1105 15:54:24.018642 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:54:24.018855 kubelet[2881]: E1105 15:54:24.018794 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4fbf7bd05464430797d5c3831755c3d7,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:24.021408 containerd[1640]: time="2025-11-05T15:54:24.021379087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 
15:54:24.150197 sshd[5105]: Connection closed by 139.178.68.195 port 47694 Nov 5 15:54:24.150956 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:24.159100 systemd-logind[1606]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:54:24.159545 systemd[1]: sshd@15-46.62.132.115:22-139.178.68.195:47694.service: Deactivated successfully. Nov 5 15:54:24.162735 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:54:24.165446 systemd-logind[1606]: Removed session 13. Nov 5 15:54:24.454054 containerd[1640]: time="2025-11-05T15:54:24.453876333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:54:24.455300 containerd[1640]: time="2025-11-05T15:54:24.455151625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:54:24.455300 containerd[1640]: time="2025-11-05T15:54:24.455270256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:54:24.456267 kubelet[2881]: E1105 15:54:24.455616 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:54:24.456267 kubelet[2881]: E1105 15:54:24.455676 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:54:24.456267 kubelet[2881]: E1105 15:54:24.455803 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s97jq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices
:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6847488456-ddlsx_calico-system(6c65c2f5-d1b1-431c-b576-9bb521d35cd6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:54:24.457407 kubelet[2881]: E1105 15:54:24.457359 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:54:24.502080 kubelet[2881]: E1105 15:54:24.501947 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 
15:54:25.503176 kubelet[2881]: E1105 15:54:25.502312 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:54:27.811392 containerd[1640]: time="2025-11-05T15:54:27.811350165Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\" id:\"4c667123b5722a5194763ead9a67e4fb635fa15b56b3633f0f7b4bce124dde84\" pid:5135 exited_at:{seconds:1762358067 nanos:810722668}" Nov 5 15:54:29.312539 systemd[1]: Started sshd@16-46.62.132.115:22-139.178.68.195:34182.service - OpenSSH per-connection server daemon (139.178.68.195:34182). Nov 5 15:54:30.359962 sshd[5148]: Accepted publickey for core from 139.178.68.195 port 34182 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:54:30.363098 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:30.370262 systemd-logind[1606]: New session 14 of user core. Nov 5 15:54:30.377247 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 5 15:54:30.502809 containerd[1640]: time="2025-11-05T15:54:30.502716748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 5 15:54:30.922902 containerd[1640]: time="2025-11-05T15:54:30.922850510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:54:30.924848 containerd[1640]: time="2025-11-05T15:54:30.924806249Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 5 15:54:30.924968 containerd[1640]: time="2025-11-05T15:54:30.924906740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 5 15:54:30.925102 kubelet[2881]: E1105 15:54:30.925040 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 15:54:30.925537 kubelet[2881]: E1105 15:54:30.925098 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 15:54:30.926292 kubelet[2881]: E1105 15:54:30.926249 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:54:30.930365 containerd[1640]: time="2025-11-05T15:54:30.930324726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 5 15:54:31.193886 sshd[5151]: Connection closed by 139.178.68.195 port 34182
Nov 5 15:54:31.194554 sshd-session[5148]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:31.199206 systemd-logind[1606]: Session 14 logged out. Waiting for processes to exit.
Nov 5 15:54:31.199617 systemd[1]: sshd@16-46.62.132.115:22-139.178.68.195:34182.service: Deactivated successfully.
Nov 5 15:54:31.200825 systemd[1]: session-14.scope: Deactivated successfully.
Nov 5 15:54:31.202845 systemd-logind[1606]: Removed session 14.
Nov 5 15:54:31.379104 containerd[1640]: time="2025-11-05T15:54:31.378448916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:54:31.379818 containerd[1640]: time="2025-11-05T15:54:31.379738480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 5 15:54:31.379818 containerd[1640]: time="2025-11-05T15:54:31.379823891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 5 15:54:31.380236 kubelet[2881]: E1105 15:54:31.379967 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:54:31.380236 kubelet[2881]: E1105 15:54:31.380019 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:54:31.380236 kubelet[2881]: E1105 15:54:31.380144 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59hs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-zmgb9_calico-system(b030d0a4-2581-4144-a827-7d0dc3133cf3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:54:31.381516 kubelet[2881]: E1105 15:54:31.381449 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3"
Nov 5 15:54:31.401365 systemd[1]: Started sshd@17-46.62.132.115:22-139.178.68.195:34186.service - OpenSSH per-connection server daemon (139.178.68.195:34186).
Nov 5 15:54:31.505733 containerd[1640]: time="2025-11-05T15:54:31.504200679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 5 15:54:31.942607 containerd[1640]: time="2025-11-05T15:54:31.942533167Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:54:31.944281 containerd[1640]: time="2025-11-05T15:54:31.944107153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 5 15:54:31.944281 containerd[1640]: time="2025-11-05T15:54:31.944181144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 5 15:54:31.944651 kubelet[2881]: E1105 15:54:31.944527 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 15:54:31.944651 kubelet[2881]: E1105 15:54:31.944598 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 15:54:31.945367 kubelet[2881]: E1105 15:54:31.944814 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fxmt4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78c8bcf4d4-sxk8m_calico-system(53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:54:31.946145 kubelet[2881]: E1105 15:54:31.946093 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6"
Nov 5 15:54:32.502262 containerd[1640]: time="2025-11-05T15:54:32.501977956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 5 15:54:32.538024 sshd[5163]: Accepted publickey for core from 139.178.68.195 port 34186 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:54:32.542983 sshd-session[5163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:32.552401 systemd-logind[1606]: New session 15 of user core.
Nov 5 15:54:32.555372 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 5 15:54:33.081031 containerd[1640]: time="2025-11-05T15:54:33.080965580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:54:33.082782 containerd[1640]: time="2025-11-05T15:54:33.082721438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 5 15:54:33.082782 containerd[1640]: time="2025-11-05T15:54:33.082821909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:54:33.083078 kubelet[2881]: E1105 15:54:33.083014 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:54:33.083078 kubelet[2881]: E1105 15:54:33.083060 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:54:33.085047 kubelet[2881]: E1105 15:54:33.084917 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l5ql9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-lp6gv_calico-system(882e95b9-8bc5-46aa-95f2-d288e79d54ed): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:54:33.086350 kubelet[2881]: E1105 15:54:33.086281 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed"
Nov 5 15:54:33.589627 sshd[5166]: Connection closed by 139.178.68.195 port 34186
Nov 5 15:54:33.591332 sshd-session[5163]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:33.607966 systemd-logind[1606]: Session 15 logged out. Waiting for processes to exit.
Nov 5 15:54:33.608974 systemd[1]: sshd@17-46.62.132.115:22-139.178.68.195:34186.service: Deactivated successfully.
Nov 5 15:54:33.613978 systemd[1]: session-15.scope: Deactivated successfully.
Nov 5 15:54:33.619495 systemd-logind[1606]: Removed session 15.
Nov 5 15:54:33.746081 systemd[1]: Started sshd@18-46.62.132.115:22-139.178.68.195:57958.service - OpenSSH per-connection server daemon (139.178.68.195:57958).
Nov 5 15:54:34.778136 sshd[5176]: Accepted publickey for core from 139.178.68.195 port 57958 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:54:34.780051 sshd-session[5176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:34.788065 systemd-logind[1606]: New session 16 of user core.
Nov 5 15:54:34.790371 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 5 15:54:36.189147 sshd[5179]: Connection closed by 139.178.68.195 port 57958
Nov 5 15:54:36.194975 sshd-session[5176]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:36.202216 systemd-logind[1606]: Session 16 logged out. Waiting for processes to exit.
Nov 5 15:54:36.202541 systemd[1]: sshd@18-46.62.132.115:22-139.178.68.195:57958.service: Deactivated successfully.
Nov 5 15:54:36.209295 systemd[1]: session-16.scope: Deactivated successfully.
Nov 5 15:54:36.215236 systemd-logind[1606]: Removed session 16.
Nov 5 15:54:36.398445 systemd[1]: Started sshd@19-46.62.132.115:22-139.178.68.195:57962.service - OpenSSH per-connection server daemon (139.178.68.195:57962).
Nov 5 15:54:37.535868 sshd[5196]: Accepted publickey for core from 139.178.68.195 port 57962 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:54:37.536944 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:37.541189 systemd-logind[1606]: New session 17 of user core.
Nov 5 15:54:37.552391 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 5 15:54:38.504118 kubelet[2881]: E1105 15:54:38.504033 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6"
Nov 5 15:54:38.620427 sshd[5220]: Connection closed by 139.178.68.195 port 57962
Nov 5 15:54:38.621995 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:38.625581 systemd-logind[1606]: Session 17 logged out. Waiting for processes to exit.
Nov 5 15:54:38.626916 systemd[1]: sshd@19-46.62.132.115:22-139.178.68.195:57962.service: Deactivated successfully.
Nov 5 15:54:38.630065 systemd[1]: session-17.scope: Deactivated successfully.
Nov 5 15:54:38.633389 systemd-logind[1606]: Removed session 17.
Nov 5 15:54:38.773199 systemd[1]: Started sshd@20-46.62.132.115:22-139.178.68.195:57970.service - OpenSSH per-connection server daemon (139.178.68.195:57970).
Nov 5 15:54:39.505600 containerd[1640]: time="2025-11-05T15:54:39.505525424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:54:39.790930 sshd[5230]: Accepted publickey for core from 139.178.68.195 port 57970 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY
Nov 5 15:54:39.792414 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:54:39.796929 systemd-logind[1606]: New session 18 of user core.
Nov 5 15:54:39.801045 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 5 15:54:39.946527 containerd[1640]: time="2025-11-05T15:54:39.946460336Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:54:39.947738 containerd[1640]: time="2025-11-05T15:54:39.947662368Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:54:39.947978 containerd[1640]: time="2025-11-05T15:54:39.947851120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:54:39.948867 kubelet[2881]: E1105 15:54:39.948192 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:54:39.948867 kubelet[2881]: E1105 15:54:39.948255 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:54:39.948867 kubelet[2881]: E1105 15:54:39.948489 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8m87n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-lq899_calico-apiserver(ed0296fa-6511-43dd-8149-4b0a2a3f1b39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:54:39.949873 containerd[1640]: time="2025-11-05T15:54:39.949463258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:54:39.950037 kubelet[2881]: E1105 15:54:39.949977 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39"
Nov 5 15:54:40.382506 containerd[1640]: time="2025-11-05T15:54:40.382293989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:54:40.383786 containerd[1640]: time="2025-11-05T15:54:40.383689475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:54:40.383868 containerd[1640]: time="2025-11-05T15:54:40.383783346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:54:40.386191 kubelet[2881]: E1105 15:54:40.384054 2881 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:54:40.386382 kubelet[2881]: E1105 15:54:40.386321 2881 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:54:40.386861 kubelet[2881]: E1105 15:54:40.386713 2881 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cw5dx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6c6845c454-2db4p_calico-apiserver(cac08146-1ce4-4e81-8229-9518d20f8fa6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:54:40.388054 kubelet[2881]: E1105 15:54:40.388021 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6"
Nov 5 15:54:40.656980 sshd[5233]: Connection closed by 139.178.68.195 port 57970
Nov 5 15:54:40.657661 sshd-session[5230]: pam_unix(sshd:session): session closed for user core
Nov 5 15:54:40.662268 systemd-logind[1606]: Session 18 logged out. Waiting for processes to exit.
Nov 5 15:54:40.663058 systemd[1]: sshd@20-46.62.132.115:22-139.178.68.195:57970.service: Deactivated successfully.
Nov 5 15:54:40.665346 systemd[1]: session-18.scope: Deactivated successfully.
Nov 5 15:54:40.667233 systemd-logind[1606]: Removed session 18.
Nov 5 15:54:43.503196 kubelet[2881]: E1105 15:54:43.503146 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" Nov 5 15:54:45.502224 kubelet[2881]: E1105 15:54:45.501898 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:54:45.868256 systemd[1]: Started sshd@21-46.62.132.115:22-139.178.68.195:57696.service - OpenSSH per-connection server daemon (139.178.68.195:57696). 
Nov 5 15:54:47.005405 sshd[5247]: Accepted publickey for core from 139.178.68.195 port 57696 ssh2: RSA SHA256:gL027GbiXzBJ1aeIXERAhMmp2CCUMKp/y61+ZAM1VlY Nov 5 15:54:47.007568 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:47.013745 systemd-logind[1606]: New session 19 of user core. Nov 5 15:54:47.018351 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:54:47.851554 sshd[5250]: Connection closed by 139.178.68.195 port 57696 Nov 5 15:54:47.852106 sshd-session[5247]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:47.859478 systemd-logind[1606]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:54:47.860569 systemd[1]: sshd@21-46.62.132.115:22-139.178.68.195:57696.service: Deactivated successfully. Nov 5 15:54:47.863135 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:54:47.865223 systemd-logind[1606]: Removed session 19. Nov 5 15:54:48.502673 kubelet[2881]: E1105 15:54:48.502628 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:54:49.505924 kubelet[2881]: E1105 15:54:49.505071 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:54:53.502800 kubelet[2881]: E1105 15:54:53.501983 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:54:55.502446 kubelet[2881]: E1105 15:54:55.502405 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:54:57.878200 containerd[1640]: time="2025-11-05T15:54:57.878033227Z" level=info 
msg="TaskExit event in podsandbox handler container_id:\"af95c75ee924f81471a027e00d178360efb8267cd42b897c81f5755922a659fc\" id:\"760c6aba60b0206e7a3184582a2dea182c5ad6a4758e4f4ea012acca19723f75\" pid:5276 exited_at:{seconds:1762358097 nanos:877512981}" Nov 5 15:54:58.502363 kubelet[2881]: E1105 15:54:58.502245 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78c8bcf4d4-sxk8m" podUID="53b3cb5e-bf4b-46ee-9b8d-ae19375a2db6" Nov 5 15:54:58.504090 kubelet[2881]: E1105 15:54:58.503938 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3" 
Nov 5 15:55:01.502237 kubelet[2881]: E1105 15:55:01.502189 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-lp6gv" podUID="882e95b9-8bc5-46aa-95f2-d288e79d54ed" Nov 5 15:55:01.503009 kubelet[2881]: E1105 15:55:01.502707 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6847488456-ddlsx" podUID="6c65c2f5-d1b1-431c-b576-9bb521d35cd6" Nov 5 15:55:02.921557 kubelet[2881]: E1105 15:55:02.921444 2881 controller.go:195] "Failed to update lease" err="Put \"https://46.62.132.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4487-0-1-1-e8a5680daa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 5 
15:55:02.984206 systemd[1]: cri-containerd-115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25.scope: Deactivated successfully. Nov 5 15:55:02.984422 systemd[1]: cri-containerd-115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25.scope: Consumed 18.883s CPU time, 134.6M memory peak, 39.2M read from disk. Nov 5 15:55:02.993835 containerd[1640]: time="2025-11-05T15:55:02.993586929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25\" id:\"115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25\" pid:3209 exit_status:1 exited_at:{seconds:1762358102 nanos:992228762}" Nov 5 15:55:02.993835 containerd[1640]: time="2025-11-05T15:55:02.993688460Z" level=info msg="received exit event container_id:\"115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25\" id:\"115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25\" pid:3209 exit_status:1 exited_at:{seconds:1762358102 nanos:992228762}" Nov 5 15:55:03.050027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25-rootfs.mount: Deactivated successfully. 
Nov 5 15:55:03.181330 kubelet[2881]: I1105 15:55:03.180929 2881 scope.go:117] "RemoveContainer" containerID="115b272a791f5f312b4f4696eb3ad3e387c194306b7b2201b15d09d1c4ad4e25" Nov 5 15:55:03.231857 containerd[1640]: time="2025-11-05T15:55:03.231717702Z" level=info msg="CreateContainer within sandbox \"98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 5 15:55:03.254813 kubelet[2881]: E1105 15:55:03.254696 2881 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:33508->10.0.0.2:2379: read: connection timed out" Nov 5 15:55:03.403266 containerd[1640]: time="2025-11-05T15:55:03.402913445Z" level=info msg="Container 464db71ebd90c53e9e6bf11e8567152600413487ece5db002acfef6a8e14bf89: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:03.445418 containerd[1640]: time="2025-11-05T15:55:03.445195197Z" level=info msg="CreateContainer within sandbox \"98c69d1b78b22ea30cbf6adaf28dbd13d99ec5d1960a77c94af04355522b9815\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"464db71ebd90c53e9e6bf11e8567152600413487ece5db002acfef6a8e14bf89\"" Nov 5 15:55:03.446408 containerd[1640]: time="2025-11-05T15:55:03.446347921Z" level=info msg="StartContainer for \"464db71ebd90c53e9e6bf11e8567152600413487ece5db002acfef6a8e14bf89\"" Nov 5 15:55:03.463227 containerd[1640]: time="2025-11-05T15:55:03.462090382Z" level=info msg="connecting to shim 464db71ebd90c53e9e6bf11e8567152600413487ece5db002acfef6a8e14bf89" address="unix:///run/containerd/s/57a835a54775c32ed12bf97f13bc62978918a4943649c9f237c276734d476b53" protocol=ttrpc version=3 Nov 5 15:55:03.503378 systemd[1]: Started cri-containerd-464db71ebd90c53e9e6bf11e8567152600413487ece5db002acfef6a8e14bf89.scope - libcontainer container 464db71ebd90c53e9e6bf11e8567152600413487ece5db002acfef6a8e14bf89. 
Nov 5 15:55:03.567736 containerd[1640]: time="2025-11-05T15:55:03.567640140Z" level=info msg="StartContainer for \"464db71ebd90c53e9e6bf11e8567152600413487ece5db002acfef6a8e14bf89\" returns successfully" Nov 5 15:55:03.598847 systemd[1]: cri-containerd-2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1.scope: Deactivated successfully. Nov 5 15:55:03.599781 systemd[1]: cri-containerd-2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1.scope: Consumed 3.010s CPU time, 81.8M memory peak, 55M read from disk. Nov 5 15:55:03.605583 containerd[1640]: time="2025-11-05T15:55:03.605530958Z" level=info msg="received exit event container_id:\"2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1\" id:\"2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1\" pid:2718 exit_status:1 exited_at:{seconds:1762358103 nanos:604160242}" Nov 5 15:55:03.606737 containerd[1640]: time="2025-11-05T15:55:03.606701343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1\" id:\"2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1\" pid:2718 exit_status:1 exited_at:{seconds:1762358103 nanos:604160242}" Nov 5 15:55:03.629731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1-rootfs.mount: Deactivated successfully. 
Nov 5 15:55:03.973818 kubelet[2881]: I1105 15:55:03.973753 2881 status_manager.go:895] "Failed to get status for pod" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:33436->10.0.0.2:2379: read: connection timed out" Nov 5 15:55:03.989731 kubelet[2881]: E1105 15:55:03.972936 2881 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:33298->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-6c6845c454-2db4p.1875273f7bba22bc calico-apiserver 1443 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-6c6845c454-2db4p,UID:cac08146-1ce4-4e81-8229-9518d20f8fa6,APIVersion:v1,ResourceVersion:800,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4487-0-1-1-e8a5680daa,},FirstTimestamp:2025-11-05 15:53:02 +0000 UTC,LastTimestamp:2025-11-05 15:54:53.501793179 +0000 UTC m=+154.152004870,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4487-0-1-1-e8a5680daa,}" Nov 5 15:55:04.179894 kubelet[2881]: I1105 15:55:04.179831 2881 scope.go:117] "RemoveContainer" containerID="2827067238d6521198b20ee2829f023e853882316e24393c3dfd035bf581e7b1" Nov 5 15:55:04.182265 containerd[1640]: time="2025-11-05T15:55:04.182144718Z" level=info msg="CreateContainer within sandbox \"a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 5 15:55:04.200114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311546295.mount: Deactivated 
successfully. Nov 5 15:55:04.202411 containerd[1640]: time="2025-11-05T15:55:04.200969605Z" level=info msg="Container 6082e9f368b071dd056da5fa1d4fd3aa590e0892abfb7c6b6b5fd894d7a9f4be: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:04.221130 containerd[1640]: time="2025-11-05T15:55:04.221009359Z" level=info msg="CreateContainer within sandbox \"a63698998df14bd8f28fd98739a0a2fa401853da5b3587959689140dc10f318d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6082e9f368b071dd056da5fa1d4fd3aa590e0892abfb7c6b6b5fd894d7a9f4be\"" Nov 5 15:55:04.222322 containerd[1640]: time="2025-11-05T15:55:04.222247374Z" level=info msg="StartContainer for \"6082e9f368b071dd056da5fa1d4fd3aa590e0892abfb7c6b6b5fd894d7a9f4be\"" Nov 5 15:55:04.224426 containerd[1640]: time="2025-11-05T15:55:04.224139857Z" level=info msg="connecting to shim 6082e9f368b071dd056da5fa1d4fd3aa590e0892abfb7c6b6b5fd894d7a9f4be" address="unix:///run/containerd/s/7436c349380b6cb1590543613e49ffa4ad12531386fcb717cf4903382541bdd8" protocol=ttrpc version=3 Nov 5 15:55:04.254440 systemd[1]: Started cri-containerd-6082e9f368b071dd056da5fa1d4fd3aa590e0892abfb7c6b6b5fd894d7a9f4be.scope - libcontainer container 6082e9f368b071dd056da5fa1d4fd3aa590e0892abfb7c6b6b5fd894d7a9f4be. 
Nov 5 15:55:04.315038 containerd[1640]: time="2025-11-05T15:55:04.314986141Z" level=info msg="StartContainer for \"6082e9f368b071dd056da5fa1d4fd3aa590e0892abfb7c6b6b5fd894d7a9f4be\" returns successfully" Nov 5 15:55:04.502210 kubelet[2881]: E1105 15:55:04.502021 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-2db4p" podUID="cac08146-1ce4-4e81-8229-9518d20f8fa6" Nov 5 15:55:06.502938 kubelet[2881]: E1105 15:55:06.502754 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6c6845c454-lq899" podUID="ed0296fa-6511-43dd-8149-4b0a2a3f1b39" Nov 5 15:55:08.418833 systemd[1]: cri-containerd-35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954.scope: Deactivated successfully. Nov 5 15:55:08.419136 systemd[1]: cri-containerd-35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954.scope: Consumed 2.103s CPU time, 39.8M memory peak, 37M read from disk. 
Nov 5 15:55:08.422296 containerd[1640]: time="2025-11-05T15:55:08.422246714Z" level=info msg="received exit event container_id:\"35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954\" id:\"35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954\" pid:2730 exit_status:1 exited_at:{seconds:1762358108 nanos:421992281}" Nov 5 15:55:08.423135 containerd[1640]: time="2025-11-05T15:55:08.423093955Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954\" id:\"35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954\" pid:2730 exit_status:1 exited_at:{seconds:1762358108 nanos:421992281}" Nov 5 15:55:08.443864 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954-rootfs.mount: Deactivated successfully. Nov 5 15:55:09.204896 kubelet[2881]: I1105 15:55:09.204854 2881 scope.go:117] "RemoveContainer" containerID="35414f5da3a9c81fc0723a6dfbb31e135a6cc703d069318500b4f43a266f4954" Nov 5 15:55:09.206816 containerd[1640]: time="2025-11-05T15:55:09.206772345Z" level=info msg="CreateContainer within sandbox \"7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 5 15:55:09.217006 containerd[1640]: time="2025-11-05T15:55:09.216957060Z" level=info msg="Container 79a347ab668299c16ea11ff696758b6fbb8163390bf2c86b4b97634767d1cdb6: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:09.222737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454517372.mount: Deactivated successfully. 
Nov 5 15:55:09.228197 containerd[1640]: time="2025-11-05T15:55:09.228133978Z" level=info msg="CreateContainer within sandbox \"7356c98217cfb1ddcbbe2594655daf764527d5a96fa0186088e67aa76dd535cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"79a347ab668299c16ea11ff696758b6fbb8163390bf2c86b4b97634767d1cdb6\"" Nov 5 15:55:09.228738 containerd[1640]: time="2025-11-05T15:55:09.228681434Z" level=info msg="StartContainer for \"79a347ab668299c16ea11ff696758b6fbb8163390bf2c86b4b97634767d1cdb6\"" Nov 5 15:55:09.229853 containerd[1640]: time="2025-11-05T15:55:09.229795688Z" level=info msg="connecting to shim 79a347ab668299c16ea11ff696758b6fbb8163390bf2c86b4b97634767d1cdb6" address="unix:///run/containerd/s/e63a1cfb7755e2150580dd85a3a07c16c3cccdd9ee379b4d9ef5acdc7f94fdd9" protocol=ttrpc version=3 Nov 5 15:55:09.253379 systemd[1]: Started cri-containerd-79a347ab668299c16ea11ff696758b6fbb8163390bf2c86b4b97634767d1cdb6.scope - libcontainer container 79a347ab668299c16ea11ff696758b6fbb8163390bf2c86b4b97634767d1cdb6. 
Nov 5 15:55:09.307937 containerd[1640]: time="2025-11-05T15:55:09.307907279Z" level=info msg="StartContainer for \"79a347ab668299c16ea11ff696758b6fbb8163390bf2c86b4b97634767d1cdb6\" returns successfully" Nov 5 15:55:09.504097 kubelet[2881]: E1105 15:55:09.503108 2881 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zmgb9" podUID="b030d0a4-2581-4144-a827-7d0dc3133cf3"