Nov 1 00:36:36.410027 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:16:48 -00 2025 Nov 1 00:36:36.410079 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06aebf6c20a38bc11b85661c7362dc459d93d17de8abe6e1c0606dc6af554184 Nov 1 00:36:36.410095 kernel: BIOS-provided physical RAM map: Nov 1 00:36:36.411889 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 1 00:36:36.411910 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 1 00:36:36.411922 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 1 00:36:36.411935 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Nov 1 00:36:36.411946 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Nov 1 00:36:36.411958 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 1 00:36:36.411969 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 1 00:36:36.411980 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 1 00:36:36.411991 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 1 00:36:36.412007 kernel: NX (Execute Disable) protection: active Nov 1 00:36:36.412019 kernel: APIC: Static calls initialized Nov 1 00:36:36.412032 kernel: SMBIOS 2.8 present. Nov 1 00:36:36.412045 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Nov 1 00:36:36.412062 kernel: DMI: Memory slots populated: 1/1 Nov 1 00:36:36.412075 kernel: Hypervisor detected: KVM Nov 1 00:36:36.412087 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 1 00:36:36.412099 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 1 00:36:36.412111 kernel: kvm-clock: using sched offset of 4939295811 cycles Nov 1 00:36:36.412125 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 1 00:36:36.412138 kernel: tsc: Detected 2499.998 MHz processor Nov 1 00:36:36.412150 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:36:36.412164 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:36:36.412181 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 1 00:36:36.412194 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 1 00:36:36.412206 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:36:36.412219 kernel: Using GB pages for direct mapping Nov 1 00:36:36.412231 kernel: ACPI: Early table checksum verification disabled Nov 1 00:36:36.412244 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Nov 1 00:36:36.412257 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:36:36.412274 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:36:36.412287 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:36:36.412299 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Nov 1 00:36:36.412312 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Nov 1 00:36:36.412324 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:36:36.412337 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:36:36.412350 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:36:36.412367 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Nov 1 00:36:36.412386 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Nov 1 00:36:36.412399 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Nov 1 00:36:36.412412 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Nov 1 00:36:36.412425 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Nov 1 00:36:36.412442 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Nov 1 00:36:36.412456 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Nov 1 00:36:36.412468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 1 00:36:36.412482 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 1 00:36:36.412495 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Nov 1 00:36:36.412508 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Nov 1 00:36:36.412521 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Nov 1 00:36:36.412538 kernel: Zone ranges: Nov 1 00:36:36.412551 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:36:36.412565 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Nov 1 00:36:36.412578 kernel: Normal empty Nov 1 00:36:36.412591 kernel: Device empty Nov 1 00:36:36.412604 kernel: Movable zone start for each node Nov 1 00:36:36.412617 kernel: Early memory node ranges Nov 1 00:36:36.412635 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 1 00:36:36.412648 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Nov 1 00:36:36.412661 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Nov 1 00:36:36.412674 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:36:36.412701 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 1 00:36:36.412714 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Nov 1 00:36:36.412728 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 1 00:36:36.412741 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 1 00:36:36.412760 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:36:36.412773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 1 00:36:36.412786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 1 00:36:36.412799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:36:36.412831 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 1 00:36:36.412845 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 1 00:36:36.412858 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:36:36.412877 kernel: TSC deadline timer available Nov 1 00:36:36.412890 kernel: CPU topo: Max. logical packages: 16 Nov 1 00:36:36.412903 kernel: CPU topo: Max. logical dies: 16 Nov 1 00:36:36.412917 kernel: CPU topo: Max. dies per package: 1 Nov 1 00:36:36.412930 kernel: CPU topo: Max. threads per core: 1 Nov 1 00:36:36.412943 kernel: CPU topo: Num. 
cores per package: 1 Nov 1 00:36:36.412956 kernel: CPU topo: Num. threads per package: 1 Nov 1 00:36:36.412973 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Nov 1 00:36:36.412986 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 1 00:36:36.412999 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 1 00:36:36.413013 kernel: Booting paravirtualized kernel on KVM Nov 1 00:36:36.413026 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:36:36.413039 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 1 00:36:36.413052 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Nov 1 00:36:36.413066 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Nov 1 00:36:36.413083 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 1 00:36:36.413096 kernel: kvm-guest: PV spinlocks enabled Nov 1 00:36:36.413110 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:36:36.413124 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06aebf6c20a38bc11b85661c7362dc459d93d17de8abe6e1c0606dc6af554184 Nov 1 00:36:36.413138 kernel: random: crng init done Nov 1 00:36:36.413151 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:36:36.413169 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 00:36:36.413183 kernel: Fallback order for Node 0: 0 Nov 1 00:36:36.413196 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Nov 1 00:36:36.413209 kernel: Policy zone: DMA32 Nov 1 00:36:36.413222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:36:36.413235 kernel: software IO TLB: area num 16. Nov 1 00:36:36.413249 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 1 00:36:36.413262 kernel: Kernel/User page tables isolation: enabled Nov 1 00:36:36.413279 kernel: ftrace: allocating 40092 entries in 157 pages Nov 1 00:36:36.413293 kernel: ftrace: allocated 157 pages with 5 groups Nov 1 00:36:36.413306 kernel: Dynamic Preempt: voluntary Nov 1 00:36:36.413319 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:36:36.413340 kernel: rcu: RCU event tracing is enabled. Nov 1 00:36:36.413356 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 1 00:36:36.413369 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:36:36.413388 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:36:36.413401 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:36:36.413414 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:36:36.413428 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 1 00:36:36.413441 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:36:36.413454 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 1 00:36:36.413468 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
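The kernel command line logged above (and echoed again in the "Kernel command line:" entry) carries the Flatcar-specific parameters: flatcar.first_boot, flatcar.oem.id, mount.usr, verity.usr and verity.usrhash. A minimal sketch, not part of the log, of splitting such a line into key/value pairs on a running system; it reads /proc/cmdline when available and otherwise falls back to the string captured in this boot log.

```python
# Minimal sketch: split a kernel command line like the one logged above into
# key=value pairs. Reads /proc/cmdline when available, otherwise falls back to
# the string captured in this boot log.
from pathlib import Path

FALLBACK = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
            "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
            "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
            "console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack "
            "flatcar.autologin "
            "verity.usrhash=06aebf6c20a38bc11b85661c7362dc459d93d17de8abe6e1c0606dc6af554184")

def parse_cmdline(raw: str) -> dict:
    params = {}
    for token in raw.split():
        key, _, value = token.partition("=")
        # Bare flags such as flatcar.autologin have no '=' and are stored as True.
        params.setdefault(key, []).append(value if value else True)
    return params

try:
    raw = Path("/proc/cmdline").read_text().strip()
except OSError:
    raw = FALLBACK

params = parse_cmdline(raw)
print(params.get("flatcar.oem.id"))     # ['openstack'] for the line above
print(params.get("console"))            # ['ttyS0,115200n8', 'tty0']
print(params.get("flatcar.autologin"))  # [True]
```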
Nov 1 00:36:36.413485 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Nov 1 00:36:36.413499 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 1 00:36:36.413523 kernel: Console: colour VGA+ 80x25 Nov 1 00:36:36.413541 kernel: printk: legacy console [tty0] enabled Nov 1 00:36:36.413554 kernel: printk: legacy console [ttyS0] enabled Nov 1 00:36:36.413568 kernel: ACPI: Core revision 20240827 Nov 1 00:36:36.413581 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:36:36.413595 kernel: x2apic enabled Nov 1 00:36:36.413608 kernel: APIC: Switched APIC routing to: physical x2apic Nov 1 00:36:36.413623 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 1 00:36:36.413641 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Nov 1 00:36:36.413654 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 1 00:36:36.413668 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 1 00:36:36.413698 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 1 00:36:36.413712 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:36:36.413725 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:36:36.413739 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:36:36.413752 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 1 00:36:36.413765 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 1 00:36:36.413779 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 1 00:36:36.413792 kernel: MDS: Mitigation: Clear CPU buffers Nov 1 00:36:36.415828 kernel: MMIO Stale Data: Unknown: No mitigations Nov 1 00:36:36.415852 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 1 00:36:36.415866 kernel: active return thunk: its_return_thunk Nov 1 00:36:36.415887 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:36:36.415901 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:36:36.415915 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:36:36.415928 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:36:36.415942 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:36:36.415955 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 1 00:36:36.415969 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:36:36.415982 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:36:36.415996 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 1 00:36:36.416009 kernel: landlock: Up and running. Nov 1 00:36:36.416045 kernel: SELinux: Initializing. Nov 1 00:36:36.416060 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:36:36.416074 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 1 00:36:36.416087 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Nov 1 00:36:36.416101 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Nov 1 00:36:36.416115 kernel: signal: max sigframe size: 1776 Nov 1 00:36:36.416129 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:36:36.416143 kernel: rcu: Max phase no-delay instances is 400. 
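The mitigation lines above (Spectre V1/V2, Speculative Store Bypass, MDS, MMIO Stale Data, SRBDS, ITS) are also exported by the kernel under /sys/devices/system/cpu/vulnerabilities once the system is up. An illustrative sketch, assuming only that standard sysfs directory:

```python
# Illustrative sketch: print the kernel's view of each CPU vulnerability,
# mirroring the Spectre/MDS/SRBDS lines in the boot log above.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        # Each file holds a one-line status such as
        # "Mitigation: Retpolines" or "Unknown: No mitigations".
        print(f"{entry.name:24} {entry.read_text().strip()}")
else:
    print("vulnerabilities directory not present on this kernel")
```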
Nov 1 00:36:36.416158 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Nov 1 00:36:36.416182 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 00:36:36.416196 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:36:36.416210 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:36:36.416223 kernel: .... node #0, CPUs: #1 Nov 1 00:36:36.416237 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:36:36.416251 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Nov 1 00:36:36.416265 kernel: Memory: 1918212K/2096616K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 172392K reserved, 0K cma-reserved) Nov 1 00:36:36.416284 kernel: devtmpfs: initialized Nov 1 00:36:36.416297 kernel: x86/mm: Memory block size: 128MB Nov 1 00:36:36.416311 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:36:36.416325 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 1 00:36:36.416339 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:36:36.416353 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:36:36.416366 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:36:36.416385 kernel: audit: type=2000 audit(1761957393.190:1): state=initialized audit_enabled=0 res=1 Nov 1 00:36:36.416399 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:36:36.416412 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:36:36.416426 kernel: cpuidle: using governor menu Nov 1 00:36:36.416440 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:36:36.416454 kernel: dca service started, version 1.12.1 Nov 1 00:36:36.416467 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 1 00:36:36.416486 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 1 00:36:36.416499 kernel: PCI: Using configuration type 1 for base access Nov 1 00:36:36.416513 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
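The "Memory: 1918212K/2096616K available ..." line above is derived from the BIOS-e820 map printed at the top of this log (two usable ranges, minus everything the kernel reserves). A rough cross-check using the ranges exactly as they appear in the log; the totals are close but not expected to match exactly, since the kernel trims the first page and other small firmware holes.

```python
# Rough cross-check of the e820 map printed earlier in this log:
# sum the "usable" ranges and compare with the total the kernel reports.
usable_ranges = [
    (0x0000000000000000, 0x000000000009fbff),  # BIOS-e820 usable
    (0x0000000000100000, 0x000000007ffdbfff),  # BIOS-e820 usable
]

usable_bytes = sum(end - start + 1 for start, end in usable_ranges)
print(f"usable per e820  : {usable_bytes // 1024} KiB")  # ~2096495 KiB
print("kernel 'Memory:' : 2096616 KiB total, 1918212 KiB available")
```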
Nov 1 00:36:36.416527 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:36:36.416541 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:36:36.416555 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:36:36.416568 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:36:36.416586 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:36:36.416600 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:36:36.416614 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:36:36.416627 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:36:36.416641 kernel: ACPI: Interpreter enabled Nov 1 00:36:36.416655 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:36:36.416668 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:36:36.416700 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:36:36.416715 kernel: PCI: Using E820 reservations for host bridge windows Nov 1 00:36:36.416729 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 1 00:36:36.416742 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:36:36.418078 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:36:36.420941 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 1 00:36:36.421190 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 1 00:36:36.421213 kernel: PCI host bridge to bus 0000:00 Nov 1 00:36:36.421474 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 1 00:36:36.421692 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 1 00:36:36.421918 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 1 00:36:36.422122 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 1 00:36:36.422331 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 1 00:36:36.422534 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Nov 1 00:36:36.422751 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:36:36.427052 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 1 00:36:36.427327 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Nov 1 00:36:36.427561 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Nov 1 00:36:36.427796 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Nov 1 00:36:36.428048 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Nov 1 00:36:36.428267 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 1 00:36:36.428529 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.428763 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Nov 1 00:36:36.429021 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 1 00:36:36.429240 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 1 00:36:36.429459 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 1 00:36:36.429709 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.430275 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Nov 1 00:36:36.430506 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 1 00:36:36.431786 kernel: pci 0000:00:02.1: 
bridge window [mem 0xfe800000-0xfe9fffff] Nov 1 00:36:36.434819 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 1 00:36:36.435111 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.435340 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Nov 1 00:36:36.435562 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 1 00:36:36.435873 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 1 00:36:36.436108 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 1 00:36:36.436354 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.436572 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Nov 1 00:36:36.436818 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 1 00:36:36.437045 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 1 00:36:36.437272 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 1 00:36:36.437516 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.437748 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Nov 1 00:36:36.440025 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 1 00:36:36.440256 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 1 00:36:36.440479 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 1 00:36:36.440747 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.440994 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Nov 1 00:36:36.441213 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 1 00:36:36.441432 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 1 00:36:36.441661 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 1 00:36:36.442923 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.443190 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Nov 1 00:36:36.443425 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 1 00:36:36.443653 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 1 00:36:36.446023 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 1 00:36:36.446278 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 1 00:36:36.446518 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Nov 1 00:36:36.446759 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 1 00:36:36.447762 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 1 00:36:36.449766 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 1 00:36:36.450053 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 1 00:36:36.451283 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Nov 1 00:36:36.451533 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Nov 1 00:36:36.451777 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Nov 1 00:36:36.453177 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Nov 1 00:36:36.455888 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 1 00:36:36.456164 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Nov 1 00:36:36.456762 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff] Nov 1 00:36:36.457022 kernel: pci 0000:00:04.0: BAR 4 
[mem 0xfd004000-0xfd007fff 64bit pref] Nov 1 00:36:36.457265 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 1 00:36:36.457491 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 1 00:36:36.457758 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 1 00:36:36.459704 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Nov 1 00:36:36.459964 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Nov 1 00:36:36.460220 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 1 00:36:36.460445 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 1 00:36:36.460723 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Nov 1 00:36:36.460980 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Nov 1 00:36:36.461207 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 1 00:36:36.461442 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 1 00:36:36.461665 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 1 00:36:36.462315 kernel: pci_bus 0000:02: extended config space not accessible Nov 1 00:36:36.462583 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Nov 1 00:36:36.466066 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Nov 1 00:36:36.466302 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 1 00:36:36.466565 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Nov 1 00:36:36.467458 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Nov 1 00:36:36.467722 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 1 00:36:36.468067 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Nov 1 00:36:36.468300 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Nov 1 00:36:36.468534 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 1 00:36:36.468773 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 1 00:36:36.469015 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 1 00:36:36.469240 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 1 00:36:36.469462 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 1 00:36:36.469723 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 1 00:36:36.469753 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 1 00:36:36.469768 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 1 00:36:36.469783 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 1 00:36:36.469797 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 1 00:36:36.471875 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 1 00:36:36.471909 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 1 00:36:36.471933 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 1 00:36:36.471967 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 1 00:36:36.471991 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 1 00:36:36.472017 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 1 00:36:36.472040 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 1 00:36:36.472064 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 1 00:36:36.472087 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 1 00:36:36.472112 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 1 00:36:36.472143 kernel: ACPI: PCI: 
Interrupt link GSIG configured for IRQ 22 Nov 1 00:36:36.472167 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 1 00:36:36.472192 kernel: iommu: Default domain type: Translated Nov 1 00:36:36.472216 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:36:36.472240 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:36:36.472263 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 1 00:36:36.472286 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 1 00:36:36.472317 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Nov 1 00:36:36.472572 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 1 00:36:36.472851 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 1 00:36:36.473075 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 1 00:36:36.473096 kernel: vgaarb: loaded Nov 1 00:36:36.473111 kernel: clocksource: Switched to clocksource kvm-clock Nov 1 00:36:36.473125 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:36:36.473147 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:36:36.473161 kernel: pnp: PnP ACPI init Nov 1 00:36:36.473402 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 1 00:36:36.473425 kernel: pnp: PnP ACPI: found 5 devices Nov 1 00:36:36.473440 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:36:36.473454 kernel: NET: Registered PF_INET protocol family Nov 1 00:36:36.473475 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:36:36.473490 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 1 00:36:36.473504 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:36:36.473518 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:36:36.473545 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 1 00:36:36.473558 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 1 00:36:36.473572 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:36:36.473603 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 1 00:36:36.473618 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:36:36.473632 kernel: NET: Registered PF_XDP protocol family Nov 1 00:36:36.474881 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Nov 1 00:36:36.475120 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 1 00:36:36.475345 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 1 00:36:36.475567 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 1 00:36:36.475832 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 1 00:36:36.476057 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 1 00:36:36.476277 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 1 00:36:36.476496 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 1 00:36:36.476729 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Nov 1 00:36:36.479040 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Nov 1 00:36:36.479281 kernel: pci 0000:00:02.2: bridge 
window [io 0x3000-0x3fff]: assigned Nov 1 00:36:36.479505 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Nov 1 00:36:36.479763 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Nov 1 00:36:36.480014 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Nov 1 00:36:36.480234 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Nov 1 00:36:36.480452 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Nov 1 00:36:36.480713 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 1 00:36:36.484689 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 1 00:36:36.484950 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 1 00:36:36.485197 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 1 00:36:36.485448 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 1 00:36:36.485670 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 1 00:36:36.485950 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 1 00:36:36.486974 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 1 00:36:36.487227 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 1 00:36:36.487457 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 1 00:36:36.487695 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 1 00:36:36.487957 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 1 00:36:36.488181 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 1 00:36:36.488409 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 1 00:36:36.488629 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 1 00:36:36.488962 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 1 00:36:36.489187 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 1 00:36:36.489406 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 1 00:36:36.489632 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 1 00:36:36.489886 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 1 00:36:36.490107 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 1 00:36:36.490330 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 1 00:36:36.490549 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 1 00:36:36.490779 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 1 00:36:36.491040 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 1 00:36:36.491268 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 1 00:36:36.491487 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 1 00:36:36.491718 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 1 00:36:36.491958 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 1 00:36:36.492177 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 1 00:36:36.492429 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 1 00:36:36.492655 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 1 00:36:36.492916 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 1 00:36:36.493136 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 1 00:36:36.493357 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 1 00:36:36.493572 kernel: pci_bus 0000:00: resource 5 [io 
0x0d00-0xffff window] Nov 1 00:36:36.493787 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 1 00:36:36.494009 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 1 00:36:36.494220 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 1 00:36:36.494422 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Nov 1 00:36:36.494663 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 1 00:36:36.494923 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Nov 1 00:36:36.495134 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Nov 1 00:36:36.495362 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Nov 1 00:36:36.495616 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Nov 1 00:36:36.495857 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 1 00:36:36.496068 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 1 00:36:36.496284 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Nov 1 00:36:36.496492 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 1 00:36:36.496720 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 1 00:36:36.496981 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 1 00:36:36.497201 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 1 00:36:36.497407 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 1 00:36:36.497630 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Nov 1 00:36:36.497869 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 1 00:36:36.498087 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 1 00:36:36.498311 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Nov 1 00:36:36.498521 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 1 00:36:36.498740 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 1 00:36:36.499001 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Nov 1 00:36:36.499218 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Nov 1 00:36:36.499424 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 1 00:36:36.499642 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Nov 1 00:36:36.499891 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 1 00:36:36.500101 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 1 00:36:36.500124 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:36:36.500147 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:36:36.500162 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 00:36:36.500176 kernel: software IO TLB: mapped [mem 0x0000000074000000-0x0000000078000000] (64MB) Nov 1 00:36:36.500191 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 00:36:36.500206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 1 00:36:36.500221 kernel: Initialise system trusted keyrings Nov 1 00:36:36.500236 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 00:36:36.500257 kernel: Key type asymmetric registered Nov 1 00:36:36.500271 kernel: Asymmetric key parser 'x509' registered Nov 1 00:36:36.500285 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 1 00:36:36.500300 kernel: io scheduler 
mq-deadline registered Nov 1 00:36:36.500314 kernel: io scheduler kyber registered Nov 1 00:36:36.500328 kernel: io scheduler bfq registered Nov 1 00:36:36.500558 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 1 00:36:36.500835 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 1 00:36:36.501061 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.501280 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 1 00:36:36.501498 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 1 00:36:36.501753 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.501999 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 1 00:36:36.502222 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 1 00:36:36.502441 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.502659 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 1 00:36:36.502916 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 1 00:36:36.503144 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.503361 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 1 00:36:36.503579 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 1 00:36:36.503836 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.504059 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 1 00:36:36.504277 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 1 00:36:36.504504 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.504735 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 1 00:36:36.504976 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 1 00:36:36.505193 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.505410 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 1 00:36:36.505643 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 1 00:36:36.505904 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 00:36:36.505927 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:36:36.505943 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:36:36.505958 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 00:36:36.505980 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:36:36.505995 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:36:36.506010 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:36:36.506024 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:36:36.506039 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:36:36.506268 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 00:36:36.506290 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Nov 1 00:36:36.506505 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 00:36:36.506730 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T00:36:34 UTC (1761957394) Nov 1 00:36:36.506965 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 1 00:36:36.506987 kernel: intel_pstate: CPU model not supported Nov 1 00:36:36.507002 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:36:36.507017 kernel: Segment Routing with IPv6 Nov 1 00:36:36.507032 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:36:36.507054 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:36:36.507068 kernel: Key type dns_resolver registered Nov 1 00:36:36.507083 kernel: IPI shorthand broadcast: enabled Nov 1 00:36:36.507098 kernel: sched_clock: Marking stable (1646004135, 230077170)->(2005230680, -129149375) Nov 1 00:36:36.507112 kernel: registered taskstats version 1 Nov 1 00:36:36.507126 kernel: Loading compiled-in X.509 certificates Nov 1 00:36:36.507141 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 82c585ed20587b8c5c20a8f7d03f29967775c2e4' Nov 1 00:36:36.507160 kernel: Demotion targets for Node 0: null Nov 1 00:36:36.507175 kernel: Key type .fscrypt registered Nov 1 00:36:36.507189 kernel: Key type fscrypt-provisioning registered Nov 1 00:36:36.507203 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:36:36.507217 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:36:36.507236 kernel: ima: No architecture policies found Nov 1 00:36:36.507251 kernel: clk: Disabling unused clocks Nov 1 00:36:36.507272 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 1 00:36:36.507287 kernel: Write protecting the kernel read-only data: 40960k Nov 1 00:36:36.507302 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 1 00:36:36.507316 kernel: Run /init as init process Nov 1 00:36:36.507330 kernel: with arguments: Nov 1 00:36:36.507345 kernel: /init Nov 1 00:36:36.507359 kernel: with environment: Nov 1 00:36:36.507373 kernel: HOME=/ Nov 1 00:36:36.507392 kernel: TERM=linux Nov 1 00:36:36.507406 kernel: ACPI: bus type USB registered Nov 1 00:36:36.507421 kernel: usbcore: registered new interface driver usbfs Nov 1 00:36:36.507435 kernel: usbcore: registered new interface driver hub Nov 1 00:36:36.507450 kernel: usbcore: registered new device driver usb Nov 1 00:36:36.507703 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 00:36:36.507962 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 1 00:36:36.508195 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 1 00:36:36.508419 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 00:36:36.508643 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 1 00:36:36.508903 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 1 00:36:36.509190 kernel: hub 1-0:1.0: USB hub found Nov 1 00:36:36.509432 kernel: hub 1-0:1.0: 4 ports detected Nov 1 00:36:36.509707 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 1 00:36:36.510000 kernel: hub 2-0:1.0: USB hub found Nov 1 00:36:36.510241 kernel: hub 2-0:1.0: 4 ports detected Nov 1 00:36:36.510263 kernel: SCSI subsystem initialized Nov 1 00:36:36.510279 kernel: libata version 3.00 loaded. 
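The rtc_cmos line above records both a wall-clock time and the epoch value it maps to ("2025-11-01T00:36:34 UTC (1761957394)"); the same epoch also appears, fractionally earlier, in the audit timestamp audit(1761957393.190:1) logged earlier. A one-line standard-library check that the two representations agree:

```python
# Check that the epoch value in the rtc_cmos line matches the UTC timestamp
# printed next to it: "setting system clock to 2025-11-01T00:36:34 UTC (1761957394)".
from datetime import datetime, timezone

epoch = 1761957394
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# -> 2025-11-01T00:36:34+00:00
```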
Nov 1 00:36:36.510506 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:36:36.510529 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:36:36.510756 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 1 00:36:36.510997 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 1 00:36:36.511216 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:36:36.511481 kernel: scsi host0: ahci Nov 1 00:36:36.511740 kernel: scsi host1: ahci Nov 1 00:36:36.512006 kernel: scsi host2: ahci Nov 1 00:36:36.512244 kernel: scsi host3: ahci Nov 1 00:36:36.512481 kernel: scsi host4: ahci Nov 1 00:36:36.512737 kernel: scsi host5: ahci Nov 1 00:36:36.512767 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Nov 1 00:36:36.512783 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Nov 1 00:36:36.512798 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Nov 1 00:36:36.512830 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Nov 1 00:36:36.512845 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Nov 1 00:36:36.512859 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Nov 1 00:36:36.513142 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 1 00:36:36.513173 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:36:36.513188 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:36:36.513202 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 00:36:36.513217 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:36:36.513232 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:36:36.513246 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:36:36.513266 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:36:36.513515 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 1 00:36:36.513539 kernel: usbcore: registered new interface driver usbhid Nov 1 00:36:36.513765 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 1 00:36:36.513787 kernel: usbhid: USB HID core driver Nov 1 00:36:36.513802 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:36:36.513840 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 1 00:36:36.513855 kernel: GPT:25804799 != 125829119 Nov 1 00:36:36.514149 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 1 00:36:36.514179 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:36:36.514194 kernel: GPT:25804799 != 125829119 Nov 1 00:36:36.514208 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:36:36.514229 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:36:36.514253 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
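The GPT warnings above ("GPT:25804799 != 125829119", "Use GNU Parted to correct GPT errors") are the usual symptom of an image written for a smaller disk being attached to a larger volume: the backup GPT header still sits at the last LBA of the original image, while the virtio disk now has 125829120 sectors. A small sketch of the arithmetic, using only the numbers from this log; later in the log, disk-uuid.service rewrites the headers ("Primary Header is updated ... Secondary Header is updated"), which is how this gets resolved on first boot.

```python
# Numbers from the virtio_blk/GPT lines above: the backup GPT header is at
# LBA 25804799, but the attached disk actually ends at LBA 125829119.
SECTOR = 512
alt_header_lba = 25804799
last_lba = 125829119

image_size = (alt_header_lba + 1) * SECTOR   # disk size the GPT was written for
disk_size = (last_lba + 1) * SECTOR          # size of the attached volume
print(f"disk the GPT was written for : {image_size / 2**30:.1f} GiB")  # ~12.3 GiB
print(f"attached disk                : {disk_size / 2**30:.1f} GiB")   # 60.0 GiB
print(f"tail not covered by the GPT  : {(disk_size - image_size) / 2**30:.1f} GiB")
```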
Nov 1 00:36:36.514268 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:36:36.514282 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 1 00:36:36.514297 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 1 00:36:36.514311 kernel: raid6: sse2x4 gen() 13473 MB/s Nov 1 00:36:36.514326 kernel: raid6: sse2x2 gen() 9190 MB/s Nov 1 00:36:36.514345 kernel: raid6: sse2x1 gen() 5802 MB/s Nov 1 00:36:36.514359 kernel: raid6: using algorithm sse2x4 gen() 13473 MB/s Nov 1 00:36:36.514374 kernel: raid6: .... xor() 4535 MB/s, rmw enabled Nov 1 00:36:36.514388 kernel: raid6: using ssse3x2 recovery algorithm Nov 1 00:36:36.514403 kernel: xor: automatically using best checksumming function avx Nov 1 00:36:36.514417 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:36:36.514432 kernel: BTRFS: device fsid 95d044e5-fb6f-4378-956f-63399a32528b devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (194) Nov 1 00:36:36.514460 kernel: BTRFS info (device dm-0): first mount of filesystem 95d044e5-fb6f-4378-956f-63399a32528b Nov 1 00:36:36.514475 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:36:36.514489 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:36:36.514511 kernel: BTRFS info (device dm-0): enabling free space tree Nov 1 00:36:36.514525 kernel: loop: module loaded Nov 1 00:36:36.514540 kernel: loop0: detected capacity change from 0 to 100120 Nov 1 00:36:36.514554 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:36:36.514581 systemd[1]: Successfully made /usr/ read-only. Nov 1 00:36:36.514600 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 1 00:36:36.514616 systemd[1]: Detected virtualization kvm. Nov 1 00:36:36.514631 systemd[1]: Detected architecture x86-64. Nov 1 00:36:36.514653 systemd[1]: Running in initrd. Nov 1 00:36:36.514668 systemd[1]: No hostname configured, using default hostname. Nov 1 00:36:36.514701 systemd[1]: Hostname set to . Nov 1 00:36:36.514717 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 1 00:36:36.514732 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:36:36.514747 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:36:36.514762 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:36:36.514778 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:36:36.514794 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:36:36.514842 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:36:36.514859 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:36:36.514875 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:36:36.514891 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:36:36.514906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
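The /usr filesystem above is mounted from /dev/mapper/usr (253:0 in the BTRFS line), a dm-verity device assembled from the verity.usr= and verity.usrhash= parameters on the command line. A hedged sketch for listing which device-mapper devices exist on the booted system, assuming only the standard /sys/block/dm-*/dm sysfs layout:

```python
# Hedged sketch: list device-mapper devices the way lsblk does, by reading the
# dm/name and dm/uuid attributes under /sys/block. On this system the verity
# device backing /usr should appear with the name "usr".
import glob
from pathlib import Path

for dm in sorted(glob.glob("/sys/block/dm-*")):
    dm_path = Path(dm)
    name = (dm_path / "dm" / "name").read_text().strip()
    uuid_file = dm_path / "dm" / "uuid"
    uuid = uuid_file.read_text().strip() if uuid_file.exists() else ""
    print(f"{dm_path.name}: name={name} uuid={uuid}")
```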
Nov 1 00:36:36.514928 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 1 00:36:36.514943 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:36:36.514959 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:36:36.514974 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:36:36.514989 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:36:36.515004 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:36:36.515019 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:36:36.515040 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:36:36.515056 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 1 00:36:36.515071 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:36:36.515086 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:36:36.515102 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:36:36.515117 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:36:36.515133 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:36:36.515153 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:36:36.515168 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:36:36.515183 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:36:36.515199 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 1 00:36:36.515215 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:36:36.515230 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:36:36.515250 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:36:36.515266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:36:36.515326 systemd-journald[330]: Collecting audit messages is disabled. Nov 1 00:36:36.515366 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:36:36.515382 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:36:36.515398 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:36:36.515414 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:36:36.515435 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:36:36.515457 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:36:36.515472 kernel: Bridge firewalling registered Nov 1 00:36:36.515487 systemd-journald[330]: Journal started Nov 1 00:36:36.515513 systemd-journald[330]: Runtime Journal (/run/log/journal/32771f05e4c14bbb9870111db8d660ed) is 4.7M, max 37.9M, 33.1M free. Nov 1 00:36:36.485418 systemd-modules-load[333]: Inserted module 'br_netfilter' Nov 1 00:36:36.552942 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:36:36.555538 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
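The bridge warning above is immediately answered by systemd-modules-load inserting br_netfilter ("Inserted module 'br_netfilter'", "Bridge firewalling registered"). A tiny sketch for confirming the module is present on the running system by scanning /proc/modules:

```python
# Tiny sketch: confirm br_netfilter is loaded, matching the
# "Inserted module 'br_netfilter'" line from systemd-modules-load above.
with open("/proc/modules") as f:
    loaded = {line.split()[0] for line in f}
print("br_netfilter loaded:", "br_netfilter" in loaded)
```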
Nov 1 00:36:36.556687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:36:36.562852 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:36:36.565029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:36:36.569983 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:36:36.576001 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:36:36.594988 systemd-tmpfiles[351]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 1 00:36:36.599889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:36:36.601180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:36:36.607632 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:36:36.612065 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:36:36.614473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:36:36.618995 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:36:36.651386 dracut-cmdline[369]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=06aebf6c20a38bc11b85661c7362dc459d93d17de8abe6e1c0606dc6af554184 Nov 1 00:36:36.688692 systemd-resolved[365]: Positive Trust Anchors: Nov 1 00:36:36.688726 systemd-resolved[365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:36:36.688733 systemd-resolved[365]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 00:36:36.688777 systemd-resolved[365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:36:36.721701 systemd-resolved[365]: Defaulting to hostname 'linux'. Nov 1 00:36:36.725095 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:36:36.725985 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:36:36.803853 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:36:36.821855 kernel: iscsi: registered transport (tcp) Nov 1 00:36:36.851937 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:36:36.852027 kernel: QLogic iSCSI HBA Driver Nov 1 00:36:36.888791 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:36:36.919373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 1 00:36:36.923585 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:36:36.993205 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:36:36.996260 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:36:36.998007 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:36:37.048118 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:36:37.052025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:36:37.088174 systemd-udevd[610]: Using default interface naming scheme 'v257'. Nov 1 00:36:37.104513 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:36:37.108191 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:36:37.145099 dracut-pre-trigger[677]: rd.md=0: removing MD RAID activation Nov 1 00:36:37.148893 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:36:37.152056 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:36:37.188636 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:36:37.192990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:36:37.218980 systemd-networkd[719]: lo: Link UP Nov 1 00:36:37.219893 systemd-networkd[719]: lo: Gained carrier Nov 1 00:36:37.220687 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:36:37.221546 systemd[1]: Reached target network.target - Network. Nov 1 00:36:37.344725 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:36:37.349259 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:36:37.510724 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 00:36:37.530079 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 00:36:37.557544 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 00:36:37.560431 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:36:37.590439 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:36:37.597845 disk-uuid[774]: Primary Header is updated. Nov 1 00:36:37.597845 disk-uuid[774]: Secondary Entries is updated. Nov 1 00:36:37.597845 disk-uuid[774]: Secondary Header is updated. Nov 1 00:36:37.607830 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:36:37.657965 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:36:37.658174 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:36:37.660720 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:36:37.666425 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 1 00:36:37.704831 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:36:37.729839 kernel: AES CTR mode by8 optimization enabled Nov 1 00:36:37.735791 systemd-networkd[719]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:36:37.735822 systemd-networkd[719]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:36:37.740294 systemd-networkd[719]: eth0: Link UP Nov 1 00:36:37.740661 systemd-networkd[719]: eth0: Gained carrier Nov 1 00:36:37.740677 systemd-networkd[719]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:36:37.764900 systemd-networkd[719]: eth0: DHCPv4 address 10.230.36.206/30, gateway 10.230.36.205 acquired from 10.230.36.205 Nov 1 00:36:37.861160 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:36:37.899348 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:36:37.901519 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:36:37.902351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:36:37.904040 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:36:37.907039 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:36:37.936300 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:36:38.717014 disk-uuid[775]: Warning: The kernel is still using the old partition table. Nov 1 00:36:38.717014 disk-uuid[775]: The new table will be used at the next reboot or after you Nov 1 00:36:38.717014 disk-uuid[775]: run partprobe(8) or kpartx(8) Nov 1 00:36:38.717014 disk-uuid[775]: The operation has completed successfully. Nov 1 00:36:38.727525 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:36:38.727728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:36:38.730778 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:36:38.798993 systemd-networkd[719]: eth0: Gained IPv6LL Nov 1 00:36:38.876861 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (866) Nov 1 00:36:38.890862 kernel: BTRFS info (device vda6): first mount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:36:38.890909 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:36:38.908738 kernel: BTRFS info (device vda6): turning on async discard Nov 1 00:36:38.908840 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 00:36:38.917860 kernel: BTRFS info (device vda6): last unmount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:36:38.918940 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:36:38.923005 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:36:39.142054 ignition[885]: Ignition 2.22.0 Nov 1 00:36:39.142081 ignition[885]: Stage: fetch-offline Nov 1 00:36:39.142171 ignition[885]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:36:39.142192 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 00:36:39.144579 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
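For illustration only: systemd-networkd above acquired the DHCPv4 lease 10.230.36.206/30 with gateway 10.230.36.205. A short Python sketch using the standard ipaddress module shows what that /30 lease implies: a four-address network whose only two usable hosts are the gateway and this machine.

    # Hedged sketch: inspect the lease values copied from the log above.
    import ipaddress

    iface = ipaddress.ip_interface("10.230.36.206/30")
    print(iface.network)                 # 10.230.36.204/30
    print(list(iface.network.hosts()))   # [10.230.36.205, 10.230.36.206]
    print(ipaddress.ip_address("10.230.36.205") in iface.network)  # True (gateway)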
Nov 1 00:36:39.142371 ignition[885]: parsed url from cmdline: "" Nov 1 00:36:39.142378 ignition[885]: no config URL provided Nov 1 00:36:39.142394 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:36:39.148031 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 1 00:36:39.142414 ignition[885]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:36:39.142438 ignition[885]: failed to fetch config: resource requires networking Nov 1 00:36:39.143118 ignition[885]: Ignition finished successfully Nov 1 00:36:39.190092 ignition[892]: Ignition 2.22.0 Nov 1 00:36:39.190118 ignition[892]: Stage: fetch Nov 1 00:36:39.190309 ignition[892]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:36:39.190326 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 00:36:39.190475 ignition[892]: parsed url from cmdline: "" Nov 1 00:36:39.190481 ignition[892]: no config URL provided Nov 1 00:36:39.190491 ignition[892]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:36:39.190504 ignition[892]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:36:39.190685 ignition[892]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 1 00:36:39.191199 ignition[892]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 1 00:36:39.191243 ignition[892]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Nov 1 00:36:39.207734 ignition[892]: GET result: OK Nov 1 00:36:39.208088 ignition[892]: parsing config with SHA512: 570babe08204ba91350337b33d895ba1f0cf57db3b9e86596f5239c4165a0198d24703ad2bd7d5f0853340056ee57b96ef4266a96360e153f208ba82c71975de Nov 1 00:36:39.214162 unknown[892]: fetched base config from "system" Nov 1 00:36:39.214746 ignition[892]: fetch: fetch complete Nov 1 00:36:39.214187 unknown[892]: fetched base config from "system" Nov 1 00:36:39.214756 ignition[892]: fetch: fetch passed Nov 1 00:36:39.214198 unknown[892]: fetched user config from "openstack" Nov 1 00:36:39.216879 ignition[892]: Ignition finished successfully Nov 1 00:36:39.220288 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:36:39.229750 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:36:39.272512 ignition[899]: Ignition 2.22.0 Nov 1 00:36:39.273598 ignition[899]: Stage: kargs Nov 1 00:36:39.273797 ignition[899]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:36:39.273831 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 00:36:39.274895 ignition[899]: kargs: kargs passed Nov 1 00:36:39.277291 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:36:39.274991 ignition[899]: Ignition finished successfully Nov 1 00:36:39.281030 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:36:39.319052 ignition[905]: Ignition 2.22.0 Nov 1 00:36:39.319072 ignition[905]: Stage: disks Nov 1 00:36:39.319276 ignition[905]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:36:39.319294 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 00:36:39.323914 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:36:39.320730 ignition[905]: disks: disks passed Nov 1 00:36:39.320801 ignition[905]: Ignition finished successfully Nov 1 00:36:39.326104 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:36:39.326994 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
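For illustration only, and not Ignition itself: the fetch stage above retrieved http://169.254.169.254/openstack/latest/user_data and logged the SHA-512 of the config it parsed. A minimal Python sketch reproduces that fetch-and-hash step for debugging on an instance; it assumes the metadata service is reachable from where it runs.

    # Hedged debugging sketch mirroring the logged GET and SHA512 lines.
    import hashlib
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(url: str = USER_DATA_URL, timeout: float = 5.0) -> bytes:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()

    if __name__ == "__main__":
        data = fetch_user_data()
        print(len(data), "bytes")
        print(hashlib.sha512(data).hexdigest())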
Nov 1 00:36:39.328459 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:36:39.331098 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:36:39.332529 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:36:39.336006 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:36:39.386708 systemd-networkd[719]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8933:24:19ff:fee6:24ce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8933:24:19ff:fee6:24ce/64 assigned by NDisc. Nov 1 00:36:39.386721 systemd-networkd[719]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 1 00:36:39.392099 systemd-fsck[914]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 1 00:36:39.397917 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:36:39.402024 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:36:39.542836 kernel: EXT4-fs (vda9): mounted filesystem 64a17da1-5670-45af-8ec7-07540a245d0c r/w with ordered data mode. Quota mode: none. Nov 1 00:36:39.544194 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:36:39.545534 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:36:39.548317 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:36:39.550845 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:36:39.554065 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:36:39.561984 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Nov 1 00:36:39.564958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:36:39.565020 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:36:39.569423 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:36:39.575833 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (922) Nov 1 00:36:39.582895 kernel: BTRFS info (device vda6): first mount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:36:39.582956 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:36:39.579153 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:36:39.603724 kernel: BTRFS info (device vda6): turning on async discard Nov 1 00:36:39.603795 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 00:36:39.610034 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:36:39.681832 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:39.699292 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:36:39.709840 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:36:39.716096 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:36:39.722748 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:36:39.843784 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:36:39.846520 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Nov 1 00:36:39.849042 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:36:39.874223 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:36:39.877488 kernel: BTRFS info (device vda6): last unmount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:36:39.902408 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:36:39.923596 ignition[1039]: INFO : Ignition 2.22.0 Nov 1 00:36:39.923596 ignition[1039]: INFO : Stage: mount Nov 1 00:36:39.925375 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:36:39.925375 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 00:36:39.925375 ignition[1039]: INFO : mount: mount passed Nov 1 00:36:39.925375 ignition[1039]: INFO : Ignition finished successfully Nov 1 00:36:39.927200 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:36:40.714840 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:42.727842 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:46.734857 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:46.740323 coreos-metadata[924]: Nov 01 00:36:46.740 WARN failed to locate config-drive, using the metadata service API instead Nov 1 00:36:46.766431 coreos-metadata[924]: Nov 01 00:36:46.766 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 1 00:36:46.785577 coreos-metadata[924]: Nov 01 00:36:46.785 INFO Fetch successful Nov 1 00:36:46.786495 coreos-metadata[924]: Nov 01 00:36:46.785 INFO wrote hostname srv-nthov.gb1.brightbox.com to /sysroot/etc/hostname Nov 1 00:36:46.788188 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Nov 1 00:36:46.788372 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Nov 1 00:36:46.792896 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:36:46.816530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:36:46.843850 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1055) Nov 1 00:36:46.849697 kernel: BTRFS info (device vda6): first mount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:36:46.849750 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:36:46.855770 kernel: BTRFS info (device vda6): turning on async discard Nov 1 00:36:46.855825 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 00:36:46.859111 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
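For illustration only: the flatcar-openstack-hostname entries above fall back from the missing config-drive to the metadata API, fetch http://169.254.169.254/latest/meta-data/hostname, and write the result into /sysroot/etc/hostname. The Python sketch below is not the actual agent (that is a separate Flatcar binary); it just replays the same two steps and assumes root privileges and an existing /sysroot/etc.

    # Hedged sketch of the fetch-and-write behaviour shown in the log above.
    import pathlib
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(HOSTNAME_URL, timeout=5.0) as resp:
            hostname = resp.read().decode().strip()
        pathlib.Path(sysroot, "etc", "hostname").write_text(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print(write_hostname())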
Nov 1 00:36:46.895528 ignition[1073]: INFO : Ignition 2.22.0 Nov 1 00:36:46.895528 ignition[1073]: INFO : Stage: files Nov 1 00:36:46.897482 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:36:46.897482 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 00:36:46.897482 ignition[1073]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:36:46.900385 ignition[1073]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:36:46.900385 ignition[1073]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:36:46.903693 ignition[1073]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:36:46.904782 ignition[1073]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:36:46.906198 unknown[1073]: wrote ssh authorized keys file for user: core Nov 1 00:36:46.907167 ignition[1073]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:36:46.908352 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:36:46.909654 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Nov 1 00:36:47.102106 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:36:47.336535 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:36:47.338088 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:36:47.352937 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:36:47.352937 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:36:47.352937 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:36:47.352937 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:36:47.352937 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:36:47.352937 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Nov 1 00:36:47.814151 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:36:50.397276 ignition[1073]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Nov 1 00:36:50.397276 ignition[1073]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:36:50.400604 ignition[1073]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:36:50.401867 ignition[1073]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:36:50.401867 ignition[1073]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:36:50.401867 ignition[1073]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:36:50.401867 ignition[1073]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:36:50.401867 ignition[1073]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:36:50.415447 ignition[1073]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:36:50.415447 ignition[1073]: INFO : files: files passed Nov 1 00:36:50.415447 ignition[1073]: INFO : Ignition finished successfully Nov 1 00:36:50.407498 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:36:50.418132 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:36:50.421252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 1 00:36:50.437189 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:36:50.437929 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:36:50.452515 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:36:50.452515 initrd-setup-root-after-ignition[1105]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:36:50.455631 initrd-setup-root-after-ignition[1109]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:36:50.456351 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:36:50.458546 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:36:50.461141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:36:50.527841 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:36:50.528033 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:36:50.531493 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 1 00:36:50.532296 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:36:50.534397 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:36:50.535784 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:36:50.582079 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:36:50.585257 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:36:50.616028 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:36:50.616288 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:36:50.619067 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:36:50.620098 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:36:50.620895 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:36:50.621124 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:36:50.623401 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:36:50.624344 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:36:50.625905 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:36:50.627415 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:36:50.629045 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:36:50.630646 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 1 00:36:50.632289 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:36:50.634024 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:36:50.635598 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:36:50.637241 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:36:50.638682 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:36:50.640214 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:36:50.640522 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:36:50.642228 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:36:50.643195 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:36:50.644566 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:36:50.644796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:36:50.646327 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:36:50.646690 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 1 00:36:50.648473 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:36:50.648785 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:36:50.650769 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:36:50.651142 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:36:50.655153 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:36:50.659181 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:36:50.660307 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Nov 1 00:36:50.660597 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:36:50.663494 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:36:50.663742 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:36:50.665601 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:36:50.665902 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:36:50.675251 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:36:50.677924 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:36:50.703625 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:36:50.709781 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:36:50.710413 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:36:50.721379 ignition[1129]: INFO : Ignition 2.22.0 Nov 1 00:36:50.721379 ignition[1129]: INFO : Stage: umount Nov 1 00:36:50.723038 ignition[1129]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:36:50.723038 ignition[1129]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 00:36:50.723038 ignition[1129]: INFO : umount: umount passed Nov 1 00:36:50.723038 ignition[1129]: INFO : Ignition finished successfully Nov 1 00:36:50.725293 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:36:50.725488 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 1 00:36:50.726779 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:36:50.727136 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:36:50.728481 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:36:50.728552 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:36:50.729830 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:36:50.729914 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 1 00:36:50.731276 systemd[1]: Stopped target network.target - Network. Nov 1 00:36:50.732543 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:36:50.732622 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:36:50.733998 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:36:50.735250 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:36:50.738941 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:36:50.740009 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:36:50.741303 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:36:50.742933 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:36:50.743003 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:36:50.744517 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:36:50.744576 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:36:50.745848 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:36:50.745929 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:36:50.747198 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:36:50.747268 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 1 00:36:50.748569 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Nov 1 00:36:50.748667 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:36:50.751083 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:36:50.752109 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:36:50.764845 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:36:50.765938 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:36:50.769779 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:36:50.770054 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:36:50.774645 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 1 00:36:50.775529 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:36:50.775622 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:36:50.778206 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:36:50.779449 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:36:50.779537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:36:50.782119 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:36:50.782190 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:36:50.784290 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:36:50.784387 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:36:50.785141 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:36:50.797516 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:36:50.797729 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:36:50.801783 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:36:50.801938 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:36:50.803054 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:36:50.803119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:36:50.807935 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:36:50.808040 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:36:50.809651 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:36:50.809744 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:36:50.811172 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:36:50.811257 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:36:50.813879 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:36:50.816290 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 1 00:36:50.816392 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:36:50.819229 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:36:50.819322 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:36:50.820707 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Nov 1 00:36:50.820786 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:36:50.823264 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:36:50.823351 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:36:50.825906 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:36:50.825990 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:36:50.840823 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:36:50.841038 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:36:50.850747 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:36:50.850970 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:36:50.853105 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:36:50.855079 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:36:50.882016 systemd[1]: Switching root. Nov 1 00:36:50.928344 systemd-journald[330]: Received SIGTERM from PID 1 (systemd). Nov 1 00:36:50.928454 systemd-journald[330]: Journal stopped Nov 1 00:36:52.513331 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:36:52.513452 kernel: SELinux: policy capability open_perms=1 Nov 1 00:36:52.513488 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:36:52.513531 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:36:52.513566 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:36:52.513592 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:36:52.513617 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:36:52.513651 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:36:52.513683 kernel: SELinux: policy capability userspace_initial_context=0 Nov 1 00:36:52.513715 kernel: audit: type=1403 audit(1761957411.192:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:36:52.513750 systemd[1]: Successfully loaded SELinux policy in 87.312ms. Nov 1 00:36:52.515849 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.390ms. Nov 1 00:36:52.515888 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 1 00:36:52.515913 systemd[1]: Detected virtualization kvm. Nov 1 00:36:52.515933 systemd[1]: Detected architecture x86-64. Nov 1 00:36:52.515953 systemd[1]: Detected first boot. Nov 1 00:36:52.515986 systemd[1]: Hostname set to . Nov 1 00:36:52.516015 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 1 00:36:52.516054 zram_generator::config[1177]: No configuration found. Nov 1 00:36:52.516084 kernel: Guest personality initialized and is inactive Nov 1 00:36:52.516113 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 1 00:36:52.516148 kernel: Initialized host personality Nov 1 00:36:52.516176 kernel: NET: Registered PF_VSOCK protocol family Nov 1 00:36:52.516203 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:36:52.516233 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
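For illustration only: the systemd 257.7 banner above lists compile-time options as a +/- feature string. A tiny Python sketch splits that string (copied verbatim from the log) into enabled and disabled options, which is handy when comparing builds.

    # Hedged sketch: parse the feature string from the systemd banner above.
    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP "
                "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 "
                "+IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS "
                "+LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 "
                "+LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP "
                "-SYSVINIT +LIBARCHIVE")

    enabled  = [tok[1:] for tok in FEATURES.split() if tok.startswith("+")]
    disabled = [tok[1:] for tok in FEATURES.split() if tok.startswith("-")]
    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))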
Nov 1 00:36:52.516279 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:36:52.516311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:36:52.516335 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:36:52.516364 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:36:52.516387 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:36:52.516416 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:36:52.516451 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:36:52.516476 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:36:52.516497 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:36:52.516518 systemd[1]: Created slice user.slice - User and Session Slice. Nov 1 00:36:52.516540 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:36:52.516561 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:36:52.516589 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:36:52.516624 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:36:52.516648 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:36:52.516671 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:36:52.516692 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:36:52.516722 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:36:52.516759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:36:52.516783 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:36:52.516827 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:36:52.516853 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:36:52.516875 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:36:52.516897 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:36:52.516947 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:36:52.516972 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:36:52.516993 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:36:52.517014 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:36:52.517035 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:36:52.517057 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 1 00:36:52.517090 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:36:52.517126 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:36:52.517156 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:36:52.517185 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:36:52.517208 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Nov 1 00:36:52.517230 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:36:52.517258 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:36:52.517307 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:52.517332 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:36:52.517361 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:36:52.517384 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:36:52.517413 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:36:52.517442 systemd[1]: Reached target machines.target - Containers. Nov 1 00:36:52.517465 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:36:52.517499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:36:52.517523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:36:52.517544 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:36:52.517567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:36:52.517588 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:36:52.517609 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:36:52.517631 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:36:52.517667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:36:52.517690 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:36:52.517711 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:36:52.517733 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:36:52.517754 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:36:52.517776 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:36:52.517798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 00:36:52.522841 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:36:52.522872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:36:52.522895 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:36:52.522916 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:36:52.522938 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 1 00:36:52.522975 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:36:52.523000 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:52.523031 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Nov 1 00:36:52.523061 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:36:52.523085 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:36:52.523126 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:36:52.523157 kernel: fuse: init (API version 7.41) Nov 1 00:36:52.523185 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:36:52.523207 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:36:52.523236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:36:52.523277 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:36:52.523308 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:36:52.523344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:36:52.523368 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:36:52.523402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:36:52.523426 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:36:52.523459 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:36:52.523494 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 1 00:36:52.523517 kernel: ACPI: bus type drm_connector registered Nov 1 00:36:52.523539 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:36:52.523560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:36:52.523581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:36:52.523603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:36:52.523637 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:36:52.523660 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:36:52.523682 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:36:52.523705 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 1 00:36:52.523763 systemd-journald[1267]: Collecting audit messages is disabled. Nov 1 00:36:52.526160 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 1 00:36:52.526206 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:36:52.526230 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 1 00:36:52.526290 systemd-journald[1267]: Journal started Nov 1 00:36:52.526343 systemd-journald[1267]: Runtime Journal (/run/log/journal/32771f05e4c14bbb9870111db8d660ed) is 4.7M, max 37.9M, 33.1M free. Nov 1 00:36:52.054762 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:36:52.071345 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 1 00:36:52.528911 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:36:52.072241 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:36:52.542225 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:36:52.542311 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:36:52.546882 systemd[1]: Reached target local-fs.target - Local File Systems. 
Nov 1 00:36:52.550992 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 1 00:36:52.554840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:36:52.558846 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:36:52.562832 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:36:52.570763 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:36:52.570838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:36:52.582883 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:36:52.586859 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:36:52.596880 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:36:52.603161 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:36:52.607303 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:36:52.610287 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:36:52.622483 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:36:52.642477 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:36:52.649563 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:36:52.653411 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 1 00:36:52.654896 kernel: loop1: detected capacity change from 0 to 110984 Nov 1 00:36:52.658565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:36:52.689961 systemd-journald[1267]: Time spent on flushing to /var/log/journal/32771f05e4c14bbb9870111db8d660ed is 93.184ms for 1151 entries. Nov 1 00:36:52.689961 systemd-journald[1267]: System Journal (/var/log/journal/32771f05e4c14bbb9870111db8d660ed) is 8M, max 588.1M, 580.1M free. Nov 1 00:36:52.806064 systemd-journald[1267]: Received client request to flush runtime journal. Nov 1 00:36:52.806129 kernel: loop2: detected capacity change from 0 to 128048 Nov 1 00:36:52.806174 kernel: loop3: detected capacity change from 0 to 224512 Nov 1 00:36:52.714000 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Nov 1 00:36:52.714022 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. Nov 1 00:36:52.732962 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:36:52.741202 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:36:52.753941 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 1 00:36:52.808578 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:36:52.820844 kernel: loop4: detected capacity change from 0 to 8 Nov 1 00:36:52.842408 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:36:52.845710 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
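For illustration only: the journald entries above report the runtime journal (4.7M of a 37.9M cap) and the persistent system journal (8M of a 588.1M cap), plus the time spent flushing to /var/log/journal. A hedged sketch queries the current totals with the standard journalctl CLI; it assumes journalctl is on PATH and the caller may read the journal.

    # Hedged sketch: report journal disk usage, as summarized in the log above.
    import subprocess

    def journal_disk_usage() -> str:
        # `journalctl --disk-usage` prints the combined size of active and
        # archived journal files visible to the caller.
        out = subprocess.run(["journalctl", "--disk-usage"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    if __name__ == "__main__":
        print(journal_disk_usage())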
Nov 1 00:36:52.855074 kernel: loop5: detected capacity change from 0 to 110984 Nov 1 00:36:52.852936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:36:52.855639 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:36:52.874857 kernel: loop6: detected capacity change from 0 to 128048 Nov 1 00:36:52.884171 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 1 00:36:52.911082 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Nov 1 00:36:52.911853 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Nov 1 00:36:52.926099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:36:52.929841 kernel: loop7: detected capacity change from 0 to 224512 Nov 1 00:36:52.951994 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:36:52.958034 kernel: loop1: detected capacity change from 0 to 8 Nov 1 00:36:52.962232 (sd-merge)[1333]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Nov 1 00:36:52.974963 (sd-merge)[1333]: Merged extensions into '/usr'. Nov 1 00:36:52.983060 systemd[1]: Reload requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:36:52.983108 systemd[1]: Reloading... Nov 1 00:36:53.095999 systemd-resolved[1334]: Positive Trust Anchors: Nov 1 00:36:53.099869 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:36:53.099883 systemd-resolved[1334]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 00:36:53.099929 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:36:53.132406 systemd-resolved[1334]: Using system hostname 'srv-nthov.gb1.brightbox.com'. Nov 1 00:36:53.147869 zram_generator::config[1371]: No configuration found. Nov 1 00:36:53.455721 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:36:53.456141 systemd[1]: Reloading finished in 472 ms. Nov 1 00:36:53.483506 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:36:53.485463 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:36:53.490543 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:36:53.508030 systemd[1]: Starting ensure-sysext.service... Nov 1 00:36:53.510464 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:36:53.551319 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 1 00:36:53.551377 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 1 00:36:53.551865 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
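For illustration only: the (sd-merge) entries above show systemd-sysext picking up containerd-flatcar.raw, docker-flatcar.raw, kubernetes.raw and oem-openstack.raw and merging them into /usr. The sketch below (assumptions: the listed directories are among the sysext search paths on this image, and systemd-sysext is on PATH) lists locally visible extension images and prints the merge status.

    # Hedged sketch: inspect sysext images and the current overlay state.
    import pathlib
    import subprocess

    for d in ("/etc/extensions", "/var/lib/extensions"):
        p = pathlib.Path(d)
        if p.is_dir():
            for img in sorted(p.iterdir()):
                print(f"{d}: {img.name}")

    # `systemd-sysext status` summarizes which hierarchies are overlaid
    # and from which extension images.
    print(subprocess.run(["systemd-sysext", "status"],
                         capture_output=True, text=True).stdout)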
Nov 1 00:36:53.552358 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 1 00:36:53.553779 systemd[1]: Reload requested from client PID 1427 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:36:53.553838 systemd[1]: Reloading... Nov 1 00:36:53.555033 systemd-tmpfiles[1428]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:36:53.555479 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Nov 1 00:36:53.555577 systemd-tmpfiles[1428]: ACLs are not supported, ignoring. Nov 1 00:36:53.564280 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:36:53.564399 systemd-tmpfiles[1428]: Skipping /boot Nov 1 00:36:53.587632 systemd-tmpfiles[1428]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:36:53.587778 systemd-tmpfiles[1428]: Skipping /boot Nov 1 00:36:53.665876 zram_generator::config[1461]: No configuration found. Nov 1 00:36:53.955249 systemd[1]: Reloading finished in 400 ms. Nov 1 00:36:53.981929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:36:53.998523 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:36:54.012065 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 00:36:54.017130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:36:54.023159 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:36:54.026594 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:36:54.031955 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:36:54.036875 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:36:54.041322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:54.041612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:36:54.044141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:36:54.060336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:36:54.068059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:36:54.068975 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:36:54.069165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 00:36:54.069343 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:54.075650 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:54.076612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 1 00:36:54.076896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:36:54.077033 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 00:36:54.077168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:54.086466 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:54.087850 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:36:54.101922 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:36:54.102917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:36:54.103001 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 00:36:54.103111 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:36:54.104954 systemd[1]: Finished ensure-sysext.service. Nov 1 00:36:54.117236 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:36:54.135455 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 1 00:36:54.201977 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:36:54.218235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:36:54.219504 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:36:54.221470 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:36:54.224174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:36:54.224589 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:36:54.231641 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:36:54.232114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:36:54.234307 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:36:54.251885 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:36:54.252323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:36:54.255557 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:36:54.259333 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:36:54.269440 systemd-udevd[1520]: Using default interface naming scheme 'v257'. 
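The modprobe@dm_mod, modprobe@loop, modprobe@efi_pstore and modprobe@drm units finishing above simply make sure those modules are available before the services that need them start. A quick way to check the result is to read /proc/modules, as in the sketch below; built-in modules never appear there, so a missing name is not automatically a failure.

    # Sketch: check whether the modules requested via modprobe@ units are loaded.
    WANTED = {"dm_mod", "loop", "drm", "efi_pstore"}

    def loaded_modules(path="/proc/modules"):
        with open(path) as f:
            return {line.split()[0] for line in f}

    if __name__ == "__main__":
        loaded = loaded_modules()
        for name in sorted(WANTED):
            state = "loaded" if name in loaded else "not listed (possibly built in)"
            print(f"{name}: {state}")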
Nov 1 00:36:54.285026 augenrules[1556]: No rules Nov 1 00:36:54.287868 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:36:54.288925 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 00:36:54.334095 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:36:54.335238 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:36:54.358739 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:36:54.364538 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:36:54.574246 systemd-networkd[1568]: lo: Link UP Nov 1 00:36:54.574263 systemd-networkd[1568]: lo: Gained carrier Nov 1 00:36:54.581609 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:36:54.583071 systemd[1]: Reached target network.target - Network. Nov 1 00:36:54.586904 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 1 00:36:54.590898 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:36:54.650559 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 1 00:36:54.669962 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:36:54.779724 systemd-networkd[1568]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:36:54.779740 systemd-networkd[1568]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:36:54.782065 systemd-networkd[1568]: eth0: Link UP Nov 1 00:36:54.783431 systemd-networkd[1568]: eth0: Gained carrier Nov 1 00:36:54.783454 systemd-networkd[1568]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:36:54.801297 systemd-networkd[1568]: eth0: DHCPv4 address 10.230.36.206/30, gateway 10.230.36.205 acquired from 10.230.36.205 Nov 1 00:36:54.809342 systemd-timesyncd[1533]: Network configuration changed, trying to establish connection. Nov 1 00:36:54.810775 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:36:54.826877 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:36:54.831868 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:36:54.859840 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 1 00:36:54.865838 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:36:54.881176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:36:54.904818 ldconfig[1518]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:36:54.910941 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:36:54.916728 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:36:54.950517 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:36:54.953767 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:36:54.955970 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
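The DHCPv4 lease logged above (10.230.36.206/30 from gateway 10.230.36.205) is a /30, which leaves exactly two usable host addresses: the gateway and this machine. The short standard-library check below confirms that from the values in the log.

    # Sketch: sanity-check the /30 DHCP lease from the journal entry above.
    import ipaddress

    iface = ipaddress.ip_interface("10.230.36.206/30")
    network = iface.network                       # 10.230.36.204/30
    hosts = [str(h) for h in network.hosts()]     # the two usable addresses

    print(f"network: {network}")
    print(f"usable hosts: {hosts}")
    assert hosts == ["10.230.36.205", "10.230.36.206"]  # gateway, then eth0's address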
Nov 1 00:36:54.957693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:36:54.958906 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 1 00:36:54.960455 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:36:54.961660 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:36:54.962892 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:36:54.963876 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:36:54.963929 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:36:54.964989 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:36:54.967918 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:36:54.973929 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:36:54.980573 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 1 00:36:54.982564 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 1 00:36:54.983888 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 1 00:36:54.987836 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:36:55.007217 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:36:55.017832 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:36:55.026360 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 1 00:36:55.029465 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:36:55.031235 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:36:55.034081 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:36:55.035948 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:36:55.036015 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:36:55.039057 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:36:55.046114 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 1 00:36:55.051104 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:36:55.061103 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:36:55.068047 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:36:55.073106 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:36:55.074891 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:36:55.082558 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 1 00:36:55.091315 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:36:55.099716 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Nov 1 00:36:55.103846 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:55.104038 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:36:55.110768 extend-filesystems[1616]: Found /dev/vda6 Nov 1 00:36:55.111143 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:36:55.121335 extend-filesystems[1616]: Found /dev/vda9 Nov 1 00:36:55.128108 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:36:55.129903 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:36:55.130627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:36:55.139105 systemd[1]: Starting update-engine.service - Update Engine... Nov 1 00:36:55.150443 jq[1615]: false Nov 1 00:36:55.153699 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:36:55.170033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:36:55.171476 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:36:55.171766 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:36:55.176205 extend-filesystems[1616]: Checking size of /dev/vda9 Nov 1 00:36:55.183806 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:36:55.184802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 1 00:36:55.190027 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Refreshing passwd entry cache Nov 1 00:36:55.190039 oslogin_cache_refresh[1617]: Refreshing passwd entry cache Nov 1 00:36:55.216426 jq[1632]: true Nov 1 00:36:55.228863 update_engine[1631]: I20251101 00:36:55.227952 1631 main.cc:92] Flatcar Update Engine starting Nov 1 00:36:55.246630 extend-filesystems[1616]: Resized partition /dev/vda9 Nov 1 00:36:55.247271 dbus-daemon[1613]: [system] SELinux support is enabled Nov 1 00:36:55.247551 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 1 00:36:55.252274 (ntainerd)[1648]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:36:55.257982 extend-filesystems[1659]: resize2fs 1.47.3 (8-Jul-2025) Nov 1 00:36:55.262713 jq[1654]: true Nov 1 00:36:55.272823 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Failure getting users, quitting Nov 1 00:36:55.272823 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 00:36:55.272823 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Refreshing group entry cache Nov 1 00:36:55.271109 oslogin_cache_refresh[1617]: Failure getting users, quitting Nov 1 00:36:55.271148 oslogin_cache_refresh[1617]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 00:36:55.271233 oslogin_cache_refresh[1617]: Refreshing group entry cache Nov 1 00:36:55.278234 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 1 00:36:55.278354 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:36:55.279320 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:36:55.279352 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:36:55.280732 dbus-daemon[1613]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1568 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 1 00:36:55.280980 tar[1636]: linux-amd64/LICENSE Nov 1 00:36:55.285089 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Failure getting groups, quitting Nov 1 00:36:55.285089 google_oslogin_nss_cache[1617]: oslogin_cache_refresh[1617]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 00:36:55.285227 tar[1636]: linux-amd64/helm Nov 1 00:36:55.282852 oslogin_cache_refresh[1617]: Failure getting groups, quitting Nov 1 00:36:55.282871 oslogin_cache_refresh[1617]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 00:36:55.288514 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 1 00:36:55.288954 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 1 00:36:55.295620 dbus-daemon[1613]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:36:55.299904 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Nov 1 00:36:55.301720 update_engine[1631]: I20251101 00:36:55.301640 1631 update_check_scheduler.cc:74] Next update check in 9m0s Nov 1 00:36:55.306896 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 1 00:36:55.307705 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:36:55.332880 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:36:55.336517 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:36:55.336905 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:36:55.357919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:36:55.549774 bash[1687]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:36:55.551332 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:36:55.560653 systemd[1]: Starting sshkeys.service... Nov 1 00:36:55.624834 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Nov 1 00:36:55.665756 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 1 00:36:55.691175 extend-filesystems[1659]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:36:55.691175 extend-filesystems[1659]: old_desc_blocks = 1, new_desc_blocks = 7 Nov 1 00:36:55.691175 extend-filesystems[1659]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Nov 1 00:36:55.670925 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 1 00:36:55.700154 extend-filesystems[1616]: Resized filesystem in /dev/vda9 Nov 1 00:36:55.689310 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:36:55.689675 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
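The online resize above grows /dev/vda9 from 1617920 to 14138363 blocks of 4 KiB, i.e. from roughly 6.2 GiB to roughly 53.9 GiB, which is what extend-filesystems exists to do on first boot. The arithmetic, using the block counts printed by the kernel and resize2fs:

    # Sketch: convert the ext4 block counts from the resize messages into GiB.
    BLOCK_SIZE = 4096            # "(4k) blocks" per the resize2fs output
    OLD_BLOCKS = 1_617_920       # size before the online resize
    NEW_BLOCKS = 14_138_363      # size afterwards

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~6.17 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~53.93 GiB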
Nov 1 00:36:55.767625 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:55.858091 sshd_keygen[1660]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:36:55.864975 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:36:55.948376 containerd[1648]: time="2025-11-01T00:36:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 1 00:36:55.969999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:36:55.980153 containerd[1648]: time="2025-11-01T00:36:55.976449853Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 1 00:36:55.997827 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:36:56.007946 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:36:56.042228 containerd[1648]: time="2025-11-01T00:36:56.042159615Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.026µs" Nov 1 00:36:56.042228 containerd[1648]: time="2025-11-01T00:36:56.042218272Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 1 00:36:56.042361 containerd[1648]: time="2025-11-01T00:36:56.042252697Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 1 00:36:56.042542 containerd[1648]: time="2025-11-01T00:36:56.042511837Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 1 00:36:56.042594 containerd[1648]: time="2025-11-01T00:36:56.042544625Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 1 00:36:56.042630 containerd[1648]: time="2025-11-01T00:36:56.042596574Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 1 00:36:56.042897 containerd[1648]: time="2025-11-01T00:36:56.042699746Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 1 00:36:56.042897 containerd[1648]: time="2025-11-01T00:36:56.042726582Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 1 00:36:56.058018 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:36:56.059915 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 1 00:36:56.067137 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
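containerd is probing its snapshotter plugins here: blockfile is skipped for lack of a scratch file generator, and the btrfs probe reported just below is skipped because /var/lib/containerd sits on ext4. The sketch below shows one way to look up which mounted filesystem backs that directory, using a simple longest-prefix match over /proc/mounts; it is an illustration, not how containerd itself performs the probe.

    # Sketch: find the filesystem type backing containerd's state directory.
    import os

    def fs_type(path, mounts="/proc/mounts"):
        path = os.path.realpath(path)
        best = ("", "unknown")
        with open(mounts) as f:
            for line in f:
                _dev, mountpoint, fstype, *_ = line.split()
                mp = mountpoint.replace("\\040", " ")   # /proc/mounts escapes spaces
                if path == mp or path.startswith(mp.rstrip("/") + "/"):
                    if len(mp) > len(best[0]):
                        best = (mp, fstype)             # keep the deepest matching mount
        return best

    if __name__ == "__main__":
        mountpoint, fstype = fs_type("/var/lib/containerd")
        print(f"/var/lib/containerd is on {mountpoint} ({fstype})")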
Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.072416871Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.072466211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.072492859Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.072507273Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.072700444Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.073142507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.073212002Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.073232164Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.073283081Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.073671123Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 1 00:36:56.075841 containerd[1648]: time="2025-11-01T00:36:56.073760957Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091082996Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091228446Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091265641Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091292411Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091312710Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091363447Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091393352Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 
containerd[1648]: time="2025-11-01T00:36:56.091419294Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091439867Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091456168Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091470615Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091496011Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091699884Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 1 00:36:56.093056 containerd[1648]: time="2025-11-01T00:36:56.091755322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091780842Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091798047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091846150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091873192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091893766Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091915470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091945361Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.091992067Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.092044379Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.092208559Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.092242079Z" level=info msg="Start snapshots syncer" Nov 1 00:36:56.093576 containerd[1648]: time="2025-11-01T00:36:56.092281669Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 1 00:36:56.103534 containerd[1648]: time="2025-11-01T00:36:56.092583967Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 1 00:36:56.103534 containerd[1648]: time="2025-11-01T00:36:56.092684397Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 1 00:36:56.101903 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096569417Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096742018Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096802322Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096846338Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096866298Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096892610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096911631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096928074Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.096978436Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.097005478Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.097025095Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.097088877Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.097114638Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 00:36:56.104391 containerd[1648]: time="2025-11-01T00:36:56.097129541Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097163373Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097191004Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097209269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097252629Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097296434Z" level=info msg="runtime interface created" Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097308439Z" level=info msg="created NRI 
interface" Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097335189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097357639Z" level=info msg="Connect containerd service" Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.097394088Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:36:56.115516 containerd[1648]: time="2025-11-01T00:36:56.104682077Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:36:56.109068 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:36:56.117394 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:36:56.118645 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:36:56.121665 systemd-logind[1630]: Watching system buttons on /dev/input/event3 (Power Button) Nov 1 00:36:56.121699 systemd-logind[1630]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:36:56.138751 systemd-logind[1630]: New seat seat0. Nov 1 00:36:56.145024 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:36:56.242550 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 1 00:36:56.245059 dbus-daemon[1613]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 1 00:36:56.246696 dbus-daemon[1613]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1665 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 1 00:36:56.280949 systemd[1]: Starting polkit.service - Authorization Manager... Nov 1 00:36:56.383982 containerd[1648]: time="2025-11-01T00:36:56.383886013Z" level=info msg="Start subscribing containerd event" Nov 1 00:36:56.384176 containerd[1648]: time="2025-11-01T00:36:56.384048488Z" level=info msg="Start recovering state" Nov 1 00:36:56.384860 containerd[1648]: time="2025-11-01T00:36:56.384395862Z" level=info msg="Start event monitor" Nov 1 00:36:56.384860 containerd[1648]: time="2025-11-01T00:36:56.384456116Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:36:56.384860 containerd[1648]: time="2025-11-01T00:36:56.384482666Z" level=info msg="Start streaming server" Nov 1 00:36:56.384860 containerd[1648]: time="2025-11-01T00:36:56.384534926Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 1 00:36:56.384860 containerd[1648]: time="2025-11-01T00:36:56.384551911Z" level=info msg="runtime interface starting up..." Nov 1 00:36:56.384860 containerd[1648]: time="2025-11-01T00:36:56.384562553Z" level=info msg="starting plugins..." Nov 1 00:36:56.384860 containerd[1648]: time="2025-11-01T00:36:56.384641201Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 1 00:36:56.388830 containerd[1648]: time="2025-11-01T00:36:56.388120439Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:36:56.388830 containerd[1648]: time="2025-11-01T00:36:56.388279401Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 1 00:36:56.388830 containerd[1648]: time="2025-11-01T00:36:56.388399034Z" level=info msg="containerd successfully booted in 0.442954s" Nov 1 00:36:56.388664 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:36:56.421289 polkitd[1734]: Started polkitd version 126 Nov 1 00:36:56.427421 polkitd[1734]: Loading rules from directory /etc/polkit-1/rules.d Nov 1 00:36:56.427930 polkitd[1734]: Loading rules from directory /run/polkit-1/rules.d Nov 1 00:36:56.428023 polkitd[1734]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 1 00:36:56.428401 polkitd[1734]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 1 00:36:56.428459 polkitd[1734]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 1 00:36:56.428523 polkitd[1734]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 1 00:36:56.429713 polkitd[1734]: Finished loading, compiling and executing 2 rules Nov 1 00:36:56.430235 systemd[1]: Started polkit.service - Authorization Manager. Nov 1 00:36:56.431544 dbus-daemon[1613]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 1 00:36:56.432548 polkitd[1734]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 1 00:36:56.446397 systemd-hostnamed[1665]: Hostname set to (static) Nov 1 00:36:56.541523 tar[1636]: linux-amd64/README.md Nov 1 00:36:56.564730 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:36:56.783324 systemd-networkd[1568]: eth0: Gained IPv6LL Nov 1 00:36:56.785874 systemd-timesyncd[1533]: Network configuration changed, trying to establish connection. Nov 1 00:36:56.788059 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:36:56.791041 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:36:56.795407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:36:56.800304 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:36:56.843327 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:36:57.224461 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:57.224922 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:57.867885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:36:57.888583 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:36:58.271936 systemd-timesyncd[1533]: Network configuration changed, trying to establish connection. Nov 1 00:36:58.275982 systemd-networkd[1568]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8933:24:19ff:fee6:24ce/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8933:24:19ff:fee6:24ce/64 assigned by NDisc. Nov 1 00:36:58.275993 systemd-networkd[1568]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
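polkitd above walks a fixed set of rules directories, tolerates the two that do not exist on this image, and ends up compiling 2 rules. The sketch below enumerates .rules files across the same four directories (paths taken verbatim from the log) to show where loaded rules would come from.

    # Sketch: count polkit .rules files in the directories polkitd reports scanning.
    from pathlib import Path

    RULE_DIRS = [
        "/etc/polkit-1/rules.d",
        "/run/polkit-1/rules.d",
        "/usr/local/share/polkit-1/rules.d",
        "/usr/share/polkit-1/rules.d",
    ]

    total = 0
    for d in RULE_DIRS:
        p = Path(d)
        rules = sorted(p.glob("*.rules")) if p.is_dir() else []
        total += len(rules)
        suffix = "" if p.is_dir() else " (directory missing)"
        print(f"{d}: {len(rules)} file(s){suffix}")
    print(f"total .rules files: {total}")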
Nov 1 00:36:58.515950 kubelet[1771]: E1101 00:36:58.515752 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:36:58.519221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:36:58.519652 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:36:58.520893 systemd[1]: kubelet.service: Consumed 1.132s CPU time, 263.9M memory peak. Nov 1 00:36:59.251772 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:36:59.251942 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:37:00.303892 systemd-timesyncd[1533]: Network configuration changed, trying to establish connection. Nov 1 00:37:00.456344 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:37:00.459690 systemd[1]: Started sshd@0-10.230.36.206:22-139.178.89.65:54974.service - OpenSSH per-connection server daemon (139.178.89.65:54974). Nov 1 00:37:01.391069 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 54974 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:37:01.393263 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:37:01.416139 systemd-logind[1630]: New session 1 of user core. Nov 1 00:37:01.419852 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:37:01.422187 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:37:01.468088 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:37:01.473158 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:37:01.494165 (systemd)[1786]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:37:01.509713 systemd-logind[1630]: New session c1 of user core. Nov 1 00:37:01.559478 login[1728]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:37:01.575204 systemd-logind[1630]: New session 2 of user core. Nov 1 00:37:01.585212 login[1725]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:37:01.593065 systemd-logind[1630]: New session 3 of user core. Nov 1 00:37:01.719118 systemd[1786]: Queued start job for default target default.target. Nov 1 00:37:01.731355 systemd[1786]: Created slice app.slice - User Application Slice. Nov 1 00:37:01.731407 systemd[1786]: Reached target paths.target - Paths. Nov 1 00:37:01.731488 systemd[1786]: Reached target timers.target - Timers. Nov 1 00:37:01.733798 systemd[1786]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:37:01.749574 systemd[1786]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:37:01.749784 systemd[1786]: Reached target sockets.target - Sockets. Nov 1 00:37:01.749895 systemd[1786]: Reached target basic.target - Basic System. Nov 1 00:37:01.749975 systemd[1786]: Reached target default.target - Main User Target. Nov 1 00:37:01.750062 systemd[1786]: Startup finished in 210ms. Nov 1 00:37:01.750293 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:37:01.762240 systemd[1]: Started session-1.scope - Session 1 of User core. 
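The kubelet failure above is the normal state for a node that has not joined a cluster yet: /var/lib/kubelet/config.yaml does not exist, so every start exits with status 1 and systemd keeps scheduling restarts (the restart counter shows up again further down). The remark that kubeadm init/join is what normally writes that file is general kubeadm behaviour, not something this log states; the sketch below just checks for the file.

    # Sketch: pre-flight check for the kubelet config file whose absence causes the exit above.
    from pathlib import Path

    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if CONFIG.is_file():
        print(f"{CONFIG} present ({CONFIG.stat().st_size} bytes); kubelet can load it")
    else:
        print(f"{CONFIG} missing; on kubeadm-managed nodes it is written by "
              "'kubeadm init' or 'kubeadm join', so kubelet keeps failing until then")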
Nov 1 00:37:01.763750 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:37:01.765117 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:37:02.403166 systemd[1]: Started sshd@1-10.230.36.206:22-139.178.89.65:54988.service - OpenSSH per-connection server daemon (139.178.89.65:54988). Nov 1 00:37:03.266907 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:37:03.272874 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 1 00:37:03.284029 coreos-metadata[1612]: Nov 01 00:37:03.283 WARN failed to locate config-drive, using the metadata service API instead Nov 1 00:37:03.284600 coreos-metadata[1695]: Nov 01 00:37:03.283 WARN failed to locate config-drive, using the metadata service API instead Nov 1 00:37:03.309823 coreos-metadata[1695]: Nov 01 00:37:03.309 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 1 00:37:03.311179 coreos-metadata[1612]: Nov 01 00:37:03.311 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Nov 1 00:37:03.323540 coreos-metadata[1612]: Nov 01 00:37:03.323 INFO Fetch failed with 404: resource not found Nov 1 00:37:03.323768 coreos-metadata[1612]: Nov 01 00:37:03.323 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 1 00:37:03.324623 coreos-metadata[1612]: Nov 01 00:37:03.324 INFO Fetch successful Nov 1 00:37:03.324623 coreos-metadata[1612]: Nov 01 00:37:03.324 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Nov 1 00:37:03.330230 sshd[1824]: Accepted publickey for core from 139.178.89.65 port 54988 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:37:03.332305 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:37:03.341278 systemd-logind[1630]: New session 4 of user core. Nov 1 00:37:03.342625 coreos-metadata[1612]: Nov 01 00:37:03.342 INFO Fetch successful Nov 1 00:37:03.342625 coreos-metadata[1612]: Nov 01 00:37:03.342 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Nov 1 00:37:03.342731 coreos-metadata[1695]: Nov 01 00:37:03.342 INFO Fetch successful Nov 1 00:37:03.342904 coreos-metadata[1695]: Nov 01 00:37:03.342 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 1 00:37:03.349148 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:37:03.357468 coreos-metadata[1612]: Nov 01 00:37:03.357 INFO Fetch successful Nov 1 00:37:03.357468 coreos-metadata[1612]: Nov 01 00:37:03.357 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Nov 1 00:37:03.370780 coreos-metadata[1695]: Nov 01 00:37:03.370 INFO Fetch successful Nov 1 00:37:03.373378 unknown[1695]: wrote ssh authorized keys file for user: core Nov 1 00:37:03.374504 coreos-metadata[1612]: Nov 01 00:37:03.374 INFO Fetch successful Nov 1 00:37:03.374768 coreos-metadata[1612]: Nov 01 00:37:03.374 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Nov 1 00:37:03.399862 coreos-metadata[1612]: Nov 01 00:37:03.399 INFO Fetch successful Nov 1 00:37:03.403461 update-ssh-keys[1833]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:37:03.406461 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 1 00:37:03.410595 systemd[1]: Finished sshkeys.service. Nov 1 00:37:03.434304 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
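With no config-drive attached, coreos-metadata above falls back to the metadata service and fetches endpoints such as http://169.254.169.254/latest/meta-data/hostname. The sketch below makes the same kind of request with only the standard library; 169.254.169.254 is link-local, so this only works from the instance itself.

    # Sketch: fetch one of the metadata endpoints shown in the coreos-metadata lines.
    from urllib.request import urlopen
    from urllib.error import URLError

    URL = "http://169.254.169.254/latest/meta-data/hostname"

    try:
        with urlopen(URL, timeout=5) as resp:
            print(resp.read().decode().strip())
    except URLError as exc:
        # Off-instance, or without a metadata service, the request simply fails.
        print(f"metadata service not reachable: {exc}")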
Nov 1 00:37:03.436587 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:37:03.437306 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:37:03.437714 systemd[1]: Startup finished in 3.061s (kernel) + 15.177s (initrd) + 12.328s (userspace) = 30.567s. Nov 1 00:37:03.953715 sshd[1831]: Connection closed by 139.178.89.65 port 54988 Nov 1 00:37:03.954697 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:03.961294 systemd[1]: sshd@1-10.230.36.206:22-139.178.89.65:54988.service: Deactivated successfully. Nov 1 00:37:03.964102 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:37:03.965489 systemd-logind[1630]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:37:03.967563 systemd-logind[1630]: Removed session 4. Nov 1 00:37:04.113016 systemd[1]: Started sshd@2-10.230.36.206:22-139.178.89.65:54994.service - OpenSSH per-connection server daemon (139.178.89.65:54994). Nov 1 00:37:05.035385 sshd[1846]: Accepted publickey for core from 139.178.89.65 port 54994 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:37:05.037081 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:37:05.044867 systemd-logind[1630]: New session 5 of user core. Nov 1 00:37:05.052054 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:37:05.659182 sshd[1849]: Connection closed by 139.178.89.65 port 54994 Nov 1 00:37:05.660066 sshd-session[1846]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:05.665110 systemd-logind[1630]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:37:05.665258 systemd[1]: sshd@2-10.230.36.206:22-139.178.89.65:54994.service: Deactivated successfully. Nov 1 00:37:05.667451 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:37:05.670041 systemd-logind[1630]: Removed session 5. Nov 1 00:37:05.816558 systemd[1]: Started sshd@3-10.230.36.206:22-139.178.89.65:47562.service - OpenSSH per-connection server daemon (139.178.89.65:47562). Nov 1 00:37:06.730547 sshd[1855]: Accepted publickey for core from 139.178.89.65 port 47562 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:37:06.732358 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:37:06.739106 systemd-logind[1630]: New session 6 of user core. Nov 1 00:37:06.750068 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:37:07.353720 sshd[1858]: Connection closed by 139.178.89.65 port 47562 Nov 1 00:37:07.354967 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:07.362082 systemd[1]: sshd@3-10.230.36.206:22-139.178.89.65:47562.service: Deactivated successfully. Nov 1 00:37:07.364947 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:37:07.366061 systemd-logind[1630]: Session 6 logged out. Waiting for processes to exit. Nov 1 00:37:07.368411 systemd-logind[1630]: Removed session 6. Nov 1 00:37:07.510450 systemd[1]: Started sshd@4-10.230.36.206:22-139.178.89.65:47568.service - OpenSSH per-connection server daemon (139.178.89.65:47568). 
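The "Startup finished" line above splits the 30.567s boot into kernel, initrd and userspace phases. Summing the rounded per-phase values gives 30.566s, one millisecond short of the printed total, which is just rounding of the underlying timestamps:

    # Sketch: add up the boot phases from the "Startup finished" message.
    phases = {"kernel": 3.061, "initrd": 15.177, "userspace": 12.328}
    total = sum(phases.values())
    print(f"sum of phases: {total:.3f}s (journal total: 30.567s; the 1 ms gap is rounding)")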
Nov 1 00:37:08.417677 sshd[1864]: Accepted publickey for core from 139.178.89.65 port 47568 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:37:08.419974 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:37:08.427803 systemd-logind[1630]: New session 7 of user core. Nov 1 00:37:08.435030 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:37:08.766478 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:37:08.770396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:09.034031 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:09.037404 sudo[1871]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:37:09.038387 sudo[1871]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:37:09.049263 (kubelet)[1876]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:37:09.054039 sudo[1871]: pam_unix(sudo:session): session closed for user root Nov 1 00:37:09.126894 kubelet[1876]: E1101 00:37:09.126774 1876 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:37:09.130938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:37:09.131176 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:37:09.131711 systemd[1]: kubelet.service: Consumed 252ms CPU time, 109.5M memory peak. Nov 1 00:37:09.199606 sshd[1867]: Connection closed by 139.178.89.65 port 47568 Nov 1 00:37:09.200309 sshd-session[1864]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:09.205577 systemd[1]: sshd@4-10.230.36.206:22-139.178.89.65:47568.service: Deactivated successfully. Nov 1 00:37:09.208085 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:37:09.210323 systemd-logind[1630]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:37:09.212316 systemd-logind[1630]: Removed session 7. Nov 1 00:37:09.368377 systemd[1]: Started sshd@5-10.230.36.206:22-139.178.89.65:47584.service - OpenSSH per-connection server daemon (139.178.89.65:47584). Nov 1 00:37:10.291700 sshd[1889]: Accepted publickey for core from 139.178.89.65 port 47584 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:37:10.293628 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:37:10.301475 systemd-logind[1630]: New session 8 of user core. Nov 1 00:37:10.309025 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 1 00:37:10.775554 sudo[1894]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:37:10.776004 sudo[1894]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:37:10.782899 sudo[1894]: pam_unix(sudo:session): session closed for user root Nov 1 00:37:10.792690 sudo[1893]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 1 00:37:10.793564 sudo[1893]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:37:10.807234 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 00:37:10.867091 augenrules[1916]: No rules Nov 1 00:37:10.868559 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:37:10.868972 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 00:37:10.870689 sudo[1893]: pam_unix(sudo:session): session closed for user root Nov 1 00:37:11.016904 sshd[1892]: Connection closed by 139.178.89.65 port 47584 Nov 1 00:37:11.017696 sshd-session[1889]: pam_unix(sshd:session): session closed for user core Nov 1 00:37:11.023621 systemd[1]: sshd@5-10.230.36.206:22-139.178.89.65:47584.service: Deactivated successfully. Nov 1 00:37:11.026067 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:37:11.027433 systemd-logind[1630]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:37:11.029326 systemd-logind[1630]: Removed session 8. Nov 1 00:37:11.176912 systemd[1]: Started sshd@6-10.230.36.206:22-139.178.89.65:47594.service - OpenSSH per-connection server daemon (139.178.89.65:47594). Nov 1 00:37:12.105564 sshd[1925]: Accepted publickey for core from 139.178.89.65 port 47594 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:37:12.107590 sshd-session[1925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:37:12.115525 systemd-logind[1630]: New session 9 of user core. Nov 1 00:37:12.123039 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:37:12.589902 sudo[1929]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:37:12.590325 sudo[1929]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:37:13.118847 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:37:13.149541 (dockerd)[1947]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:37:13.518442 dockerd[1947]: time="2025-11-01T00:37:13.517942767Z" level=info msg="Starting up" Nov 1 00:37:13.519891 dockerd[1947]: time="2025-11-01T00:37:13.519864668Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 1 00:37:13.538197 dockerd[1947]: time="2025-11-01T00:37:13.538153914Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 1 00:37:13.559291 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1800373078-merged.mount: Deactivated successfully. Nov 1 00:37:13.594343 dockerd[1947]: time="2025-11-01T00:37:13.594272948Z" level=info msg="Loading containers: start." Nov 1 00:37:13.609877 kernel: Initializing XFRM netlink socket Nov 1 00:37:13.891434 systemd-timesyncd[1533]: Network configuration changed, trying to establish connection. 
Nov 1 00:37:13.949971 systemd-networkd[1568]: docker0: Link UP Nov 1 00:37:13.954168 dockerd[1947]: time="2025-11-01T00:37:13.954125687Z" level=info msg="Loading containers: done." Nov 1 00:37:13.976416 dockerd[1947]: time="2025-11-01T00:37:13.973989028Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:37:13.976416 dockerd[1947]: time="2025-11-01T00:37:13.974086179Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 1 00:37:13.976416 dockerd[1947]: time="2025-11-01T00:37:13.974232492Z" level=info msg="Initializing buildkit" Nov 1 00:37:13.976121 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1315995667-merged.mount: Deactivated successfully. Nov 1 00:37:14.007088 dockerd[1947]: time="2025-11-01T00:37:14.007047412Z" level=info msg="Completed buildkit initialization" Nov 1 00:37:14.016038 dockerd[1947]: time="2025-11-01T00:37:14.016008710Z" level=info msg="Daemon has completed initialization" Nov 1 00:37:14.016355 dockerd[1947]: time="2025-11-01T00:37:14.016230957Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:37:14.016606 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:37:14.707767 systemd-timesyncd[1533]: Contacted time server [2a01:7e00::f03c:95ff:fed3:b38b]:123 (2.flatcar.pool.ntp.org). Nov 1 00:37:14.707951 systemd-timesyncd[1533]: Initial clock synchronization to Sat 2025-11-01 00:37:14.707454 UTC. Nov 1 00:37:14.708853 systemd-resolved[1334]: Clock change detected. Flushing caches. Nov 1 00:37:16.078566 containerd[1648]: time="2025-11-01T00:37:16.078199678Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:37:17.059554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922421831.mount: Deactivated successfully. 
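Once dockerd reports "API listen on /run/docker.sock", the Engine API answers on that unix socket. The sketch below issues a raw GET /version against it with only the standard library and prints the server version (28.0.4 according to the daemon line above); real tooling would go through the docker CLI or an SDK, this is only to show the socket is live, and it needs root or docker-group access.

    # Sketch: minimal GET /version against the Docker Engine API unix socket.
    import json
    import socket

    SOCK = "/run/docker.sock"

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):   # HTTP/1.0: the server closes when done
            raw += chunk

    _headers, _, body = raw.partition(b"\r\n\r\n")
    info = json.loads(body)
    print("server version:", info["Version"])
    print("api version:   ", info["ApiVersion"])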
Nov 1 00:37:19.039869 containerd[1648]: time="2025-11-01T00:37:19.039730793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:19.042754 containerd[1648]: time="2025-11-01T00:37:19.042650774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Nov 1 00:37:19.043979 containerd[1648]: time="2025-11-01T00:37:19.043945614Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:19.048263 containerd[1648]: time="2025-11-01T00:37:19.048197411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:19.050889 containerd[1648]: time="2025-11-01T00:37:19.050840949Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.972489161s" Nov 1 00:37:19.051000 containerd[1648]: time="2025-11-01T00:37:19.050929714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:37:19.052372 containerd[1648]: time="2025-11-01T00:37:19.052327227Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:37:19.848199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:37:19.852373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:20.026808 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:20.041902 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:37:20.130460 kubelet[2226]: E1101 00:37:20.130049 2226 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:37:20.134138 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:37:20.134623 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:37:20.135572 systemd[1]: kubelet.service: Consumed 217ms CPU time, 110.7M memory peak. 
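The kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm when the node is initialized or joined, which has not happened at this point in the log. A trivial preflight sketch for the same condition (illustrative only):

import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def kubelet_config_present(path=KUBELET_CONFIG):
    # Roughly mirrors the condition the failing kubelet reports above.
    if not os.path.isfile(path):
        print(f"missing {path}; kubelet will keep exiting until it is written",
              file=sys.stderr)
        return False
    return True

if __name__ == "__main__":
    sys.exit(0 if kubelet_config_present() else 1)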
Nov 1 00:37:21.645539 containerd[1648]: time="2025-11-01T00:37:21.645029614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:21.647130 containerd[1648]: time="2025-11-01T00:37:21.646833076Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Nov 1 00:37:21.648069 containerd[1648]: time="2025-11-01T00:37:21.648021636Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:21.651529 containerd[1648]: time="2025-11-01T00:37:21.651475544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:21.653077 containerd[1648]: time="2025-11-01T00:37:21.653040374Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.600588533s" Nov 1 00:37:21.653252 containerd[1648]: time="2025-11-01T00:37:21.653220983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:37:21.654580 containerd[1648]: time="2025-11-01T00:37:21.654389422Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:37:24.859510 containerd[1648]: time="2025-11-01T00:37:24.859396054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:24.861723 containerd[1648]: time="2025-11-01T00:37:24.861686864Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Nov 1 00:37:24.863624 containerd[1648]: time="2025-11-01T00:37:24.863567887Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:24.866548 containerd[1648]: time="2025-11-01T00:37:24.866509474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:24.868227 containerd[1648]: time="2025-11-01T00:37:24.868163335Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 3.213490702s" Nov 1 00:37:24.868227 containerd[1648]: time="2025-11-01T00:37:24.868205724Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:37:24.868965 containerd[1648]: 
time="2025-11-01T00:37:24.868757179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:37:27.824128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount557240645.mount: Deactivated successfully. Nov 1 00:37:28.590458 containerd[1648]: time="2025-11-01T00:37:28.589594326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:28.591608 containerd[1648]: time="2025-11-01T00:37:28.591579904Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Nov 1 00:37:28.593656 containerd[1648]: time="2025-11-01T00:37:28.593613005Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:28.596591 containerd[1648]: time="2025-11-01T00:37:28.596556443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:28.597448 containerd[1648]: time="2025-11-01T00:37:28.597408362Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.728618439s" Nov 1 00:37:28.597538 containerd[1648]: time="2025-11-01T00:37:28.597450619Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:37:28.598327 containerd[1648]: time="2025-11-01T00:37:28.598296054Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:37:28.874027 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 1 00:37:29.342072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325722754.mount: Deactivated successfully. Nov 1 00:37:30.348511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 1 00:37:30.352853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:30.828559 containerd[1648]: time="2025-11-01T00:37:30.827907661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:30.830560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
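From the figures containerd logs for the kube-proxy pull above (size "30923225" bytes in 3.728618439 s) one can estimate the effective pull rate; the logged size is the image content size rather than necessarily bytes on the wire, so this is only a rough number.

# Back-of-the-envelope rate for the kube-proxy:v1.32.9 pull logged above.
size_bytes = 30_923_225        # size reported in the "Pulled image" entry
elapsed_s = 3.728618439        # duration reported in the same entry

rate = size_bytes / elapsed_s
print(f"{rate / 1e6:.1f} MB/s  ({rate / 2**20:.1f} MiB/s)")   # ~8.3 MB/s (~7.9 MiB/s)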
Nov 1 00:37:30.832562 containerd[1648]: time="2025-11-01T00:37:30.831411604Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Nov 1 00:37:30.834399 containerd[1648]: time="2025-11-01T00:37:30.834362361Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:30.842954 containerd[1648]: time="2025-11-01T00:37:30.842442052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:30.844537 containerd[1648]: time="2025-11-01T00:37:30.843543296Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.245207399s" Nov 1 00:37:30.844537 containerd[1648]: time="2025-11-01T00:37:30.843586763Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:37:30.844677 containerd[1648]: time="2025-11-01T00:37:30.844630658Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:37:30.846893 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:37:30.963829 kubelet[2312]: E1101 00:37:30.963741 2312 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:37:30.966198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:37:30.966437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:37:30.966956 systemd[1]: kubelet.service: Consumed 216ms CPU time, 107.8M memory peak. Nov 1 00:37:32.019405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294339145.mount: Deactivated successfully. 
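The kubelet keeps failing on the same missing config file, so systemd keeps scheduling restart jobs; the journal timestamps of the "restart counter is at 2" and "restart counter is at 3" entries above put the attempts roughly 10.5 s apart. A small check of that spacing (the unit's actual RestartSec= setting is not visible in this log):

from datetime import datetime

# Timestamps of the two "Scheduled restart job" entries seen so far in this boot.
t2 = datetime.strptime("00:37:19.848199", "%H:%M:%S.%f")   # counter is at 2
t3 = datetime.strptime("00:37:30.348511", "%H:%M:%S.%f")   # counter is at 3

print(f"{(t3 - t2).total_seconds():.1f} s between restart attempts")   # ~10.5 s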
Nov 1 00:37:32.026553 containerd[1648]: time="2025-11-01T00:37:32.025503458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:37:32.026553 containerd[1648]: time="2025-11-01T00:37:32.026515166Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 1 00:37:32.027304 containerd[1648]: time="2025-11-01T00:37:32.027265751Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:37:32.030022 containerd[1648]: time="2025-11-01T00:37:32.029979909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:37:32.031109 containerd[1648]: time="2025-11-01T00:37:32.031075411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.186412403s" Nov 1 00:37:32.031249 containerd[1648]: time="2025-11-01T00:37:32.031223115Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:37:32.032273 containerd[1648]: time="2025-11-01T00:37:32.032230693Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:37:32.882999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906620216.mount: Deactivated successfully. 
Nov 1 00:37:38.645520 containerd[1648]: time="2025-11-01T00:37:38.643668011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:38.645520 containerd[1648]: time="2025-11-01T00:37:38.645214503Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:38.645520 containerd[1648]: time="2025-11-01T00:37:38.645294044Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Nov 1 00:37:38.650285 containerd[1648]: time="2025-11-01T00:37:38.650222683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:38.652261 containerd[1648]: time="2025-11-01T00:37:38.651776823Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.619379569s" Nov 1 00:37:38.652261 containerd[1648]: time="2025-11-01T00:37:38.651838003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:37:41.098551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 1 00:37:41.103633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:41.295758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:41.305069 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:37:41.417443 kubelet[2403]: E1101 00:37:41.417274 2403 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:37:41.421126 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:37:41.421500 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:37:41.422574 systemd[1]: kubelet.service: Consumed 208ms CPU time, 107.7M memory peak. Nov 1 00:37:41.463819 update_engine[1631]: I20251101 00:37:41.463660 1631 update_attempter.cc:509] Updating boot flags... Nov 1 00:37:42.189798 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:42.190106 systemd[1]: kubelet.service: Consumed 208ms CPU time, 107.7M memory peak. Nov 1 00:37:42.193211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:42.230735 systemd[1]: Reload requested from client PID 2433 ('systemctl') (unit session-9.scope)... Nov 1 00:37:42.230780 systemd[1]: Reloading... Nov 1 00:37:42.391517 zram_generator::config[2479]: No configuration found. Nov 1 00:37:42.743893 systemd[1]: Reloading finished in 512 ms. 
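With the etcd image done, every control-plane image this log pulls is accounted for. The pulls run back to back (each PullImage starts right after the previous one returns), so summing the durations containerd reported gives the approximate wall-clock time spent pulling:

# Per-image pull durations taken verbatim from the "Pulled image ... in <N>s" entries.
pull_seconds = {
    "kube-apiserver:v1.32.9":          2.972489161,
    "kube-controller-manager:v1.32.9": 2.600588533,
    "kube-scheduler:v1.32.9":          3.213490702,
    "kube-proxy:v1.32.9":              3.728618439,
    "coredns/coredns:v1.11.3":         2.245207399,
    "pause:3.10":                      1.186412403,
    "etcd:3.5.16-0":                   6.619379569,
}
print(f"total pull time: {sum(pull_seconds.values()):.1f} s")   # ~22.6 s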
Nov 1 00:37:42.845058 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:37:42.845391 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:37:42.845981 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:42.846182 systemd[1]: kubelet.service: Consumed 140ms CPU time, 98.6M memory peak. Nov 1 00:37:42.848629 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:43.026126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:43.038184 (kubelet)[2546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:37:43.168344 kubelet[2546]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:37:43.168908 kubelet[2546]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:37:43.169027 kubelet[2546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:37:43.169251 kubelet[2546]: I1101 00:37:43.169211 2546 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:37:43.746666 kubelet[2546]: I1101 00:37:43.746599 2546 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:37:43.746845 kubelet[2546]: I1101 00:37:43.746827 2546 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:37:43.747291 kubelet[2546]: I1101 00:37:43.747268 2546 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:37:43.952196 kubelet[2546]: I1101 00:37:43.952140 2546 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:37:43.955104 kubelet[2546]: E1101 00:37:43.954989 2546 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.36.206:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:43.981803 kubelet[2546]: I1101 00:37:43.981774 2546 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 00:37:43.990436 kubelet[2546]: I1101 00:37:43.990406 2546 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:37:43.993057 kubelet[2546]: I1101 00:37:43.992652 2546 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:37:43.993057 kubelet[2546]: I1101 00:37:43.992705 2546 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-nthov.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:37:43.995664 kubelet[2546]: I1101 00:37:43.995622 2546 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:37:43.995786 kubelet[2546]: I1101 00:37:43.995768 2546 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:37:43.997800 kubelet[2546]: I1101 00:37:43.997220 2546 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:37:44.001840 kubelet[2546]: I1101 00:37:44.001817 2546 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:37:44.001998 kubelet[2546]: I1101 00:37:44.001977 2546 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:37:44.003998 kubelet[2546]: I1101 00:37:44.003975 2546 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:37:44.004130 kubelet[2546]: I1101 00:37:44.004110 2546 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:37:44.008135 kubelet[2546]: W1101 00:37:44.008074 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.36.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nthov.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:44.008220 kubelet[2546]: E1101 00:37:44.008147 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.36.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nthov.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:44.010540 
kubelet[2546]: I1101 00:37:44.009540 2546 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 1 00:37:44.013403 kubelet[2546]: I1101 00:37:44.013274 2546 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:37:44.013403 kubelet[2546]: W1101 00:37:44.013392 2546 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:37:44.017956 kubelet[2546]: W1101 00:37:44.017913 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.36.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:44.018101 kubelet[2546]: E1101 00:37:44.018073 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.36.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:44.018574 kubelet[2546]: I1101 00:37:44.018550 2546 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:37:44.018740 kubelet[2546]: I1101 00:37:44.018720 2546 server.go:1287] "Started kubelet" Nov 1 00:37:44.020969 kubelet[2546]: I1101 00:37:44.020709 2546 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:37:44.022143 kubelet[2546]: I1101 00:37:44.021999 2546 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:37:44.025852 kubelet[2546]: I1101 00:37:44.025815 2546 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:37:44.026287 kubelet[2546]: I1101 00:37:44.026265 2546 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:37:44.026580 kubelet[2546]: I1101 00:37:44.026533 2546 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:37:44.027574 kubelet[2546]: I1101 00:37:44.026374 2546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:37:44.029827 kubelet[2546]: I1101 00:37:44.029801 2546 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:37:44.030189 kubelet[2546]: I1101 00:37:44.030155 2546 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:37:44.030258 kubelet[2546]: I1101 00:37:44.030232 2546 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:37:44.040529 kubelet[2546]: E1101 00:37:44.034412 2546 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.36.206:6443/api/v1/namespaces/default/events\": dial tcp 10.230.36.206:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-nthov.gb1.brightbox.com.1873bafa702ea477 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-nthov.gb1.brightbox.com,UID:srv-nthov.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-nthov.gb1.brightbox.com,},FirstTimestamp:2025-11-01 00:37:44.018691191 +0000 UTC m=+0.975547825,LastTimestamp:2025-11-01 
00:37:44.018691191 +0000 UTC m=+0.975547825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-nthov.gb1.brightbox.com,}" Nov 1 00:37:44.040709 kubelet[2546]: W1101 00:37:44.040607 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.36.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:44.040709 kubelet[2546]: E1101 00:37:44.040684 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.36.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:44.041735 kubelet[2546]: E1101 00:37:44.041240 2546 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-nthov.gb1.brightbox.com\" not found" Nov 1 00:37:44.043032 kubelet[2546]: E1101 00:37:44.042989 2546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nthov.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.206:6443: connect: connection refused" interval="200ms" Nov 1 00:37:44.048898 kubelet[2546]: I1101 00:37:44.048870 2546 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:37:44.051288 kubelet[2546]: I1101 00:37:44.051264 2546 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:37:44.051383 kubelet[2546]: I1101 00:37:44.051366 2546 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:37:44.076089 kubelet[2546]: I1101 00:37:44.076007 2546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:37:44.081507 kubelet[2546]: I1101 00:37:44.081234 2546 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:37:44.081507 kubelet[2546]: I1101 00:37:44.081282 2546 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:37:44.081507 kubelet[2546]: I1101 00:37:44.081321 2546 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
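The crio container factory above fails to register because /var/run/crio/crio.sock does not exist, while the containerd and systemd factories register fine. A quick sketch probing the usual runtime socket paths; apart from crio's path and /run/docker.sock, which both appear in this log, the paths are assumed defaults, not values taken from this host.

import os

candidate_sockets = {
    "crio":       "/var/run/crio/crio.sock",           # absent here, per the log
    "containerd": "/run/containerd/containerd.sock",   # assumed default path
    "docker":     "/run/docker.sock",                  # logged earlier by dockerd
}

for runtime, path in candidate_sockets.items():
    state = "present" if os.path.exists(path) else "absent"
    print(f"{runtime:10s} {state:7s} {path}")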
Nov 1 00:37:44.081507 kubelet[2546]: I1101 00:37:44.081334 2546 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:37:44.081507 kubelet[2546]: E1101 00:37:44.081426 2546 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:37:44.082603 kubelet[2546]: W1101 00:37:44.082553 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.36.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:44.082702 kubelet[2546]: E1101 00:37:44.082636 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.36.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:44.086458 kubelet[2546]: I1101 00:37:44.086427 2546 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:37:44.086458 kubelet[2546]: I1101 00:37:44.086451 2546 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:37:44.086806 kubelet[2546]: I1101 00:37:44.086561 2546 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:37:44.090123 kubelet[2546]: I1101 00:37:44.090071 2546 policy_none.go:49] "None policy: Start" Nov 1 00:37:44.090123 kubelet[2546]: I1101 00:37:44.090114 2546 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:37:44.090266 kubelet[2546]: I1101 00:37:44.090143 2546 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:37:44.099839 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:37:44.117634 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:37:44.122967 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 00:37:44.136300 kubelet[2546]: I1101 00:37:44.135983 2546 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:37:44.138140 kubelet[2546]: I1101 00:37:44.137078 2546 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:37:44.138140 kubelet[2546]: I1101 00:37:44.137131 2546 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:37:44.138140 kubelet[2546]: I1101 00:37:44.137753 2546 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:37:44.140485 kubelet[2546]: E1101 00:37:44.140456 2546 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:37:44.140655 kubelet[2546]: E1101 00:37:44.140598 2546 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-nthov.gb1.brightbox.com\" not found" Nov 1 00:37:44.201675 systemd[1]: Created slice kubepods-burstable-pod7ea621cf20bfb26c5b0fcb1e69befd29.slice - libcontainer container kubepods-burstable-pod7ea621cf20bfb26c5b0fcb1e69befd29.slice. 
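The eviction manager that starts its control loop here enforces the hard eviction thresholds listed in the container manager config logged earlier (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A toy evaluation of those thresholds against made-up node observations; the observations are purely illustrative.

# Hard eviction thresholds from the logged NodeConfig; observations are hypothetical.
thresholds = {
    "memory.available":   100 * 1024**2,   # bytes (100Mi)
    "nodefs.available":   0.10,            # fraction of filesystem capacity
    "nodefs.inodesFree":  0.05,
    "imagefs.available":  0.15,
    "imagefs.inodesFree": 0.05,
}
observed = {
    "memory.available":   80 * 1024**2,    # hypothetical: below the 100Mi threshold
    "nodefs.available":   0.22,
    "nodefs.inodesFree":  0.40,
    "imagefs.available":  0.22,
    "imagefs.inodesFree": 0.40,
}

for signal, limit in thresholds.items():
    tripped = observed[signal] < limit
    print(f"{signal:19s} observed={observed[signal]!r:>12} limit={limit!r:>10} "
          f"{'EVICT' if tripped else 'ok'}")
# Only memory.available trips here (80Mi < 100Mi), so memory would be reclaimed first.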
Nov 1 00:37:44.226134 kubelet[2546]: E1101 00:37:44.225877 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.231247 systemd[1]: Created slice kubepods-burstable-podbe5b39ede1078b7954710175f3c12aeb.slice - libcontainer container kubepods-burstable-podbe5b39ede1078b7954710175f3c12aeb.slice. Nov 1 00:37:44.231945 kubelet[2546]: I1101 00:37:44.231917 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.232135 kubelet[2546]: I1101 00:37:44.232098 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ea621cf20bfb26c5b0fcb1e69befd29-ca-certs\") pod \"kube-apiserver-srv-nthov.gb1.brightbox.com\" (UID: \"7ea621cf20bfb26c5b0fcb1e69befd29\") " pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.232277 kubelet[2546]: I1101 00:37:44.232254 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ea621cf20bfb26c5b0fcb1e69befd29-usr-share-ca-certificates\") pod \"kube-apiserver-srv-nthov.gb1.brightbox.com\" (UID: \"7ea621cf20bfb26c5b0fcb1e69befd29\") " pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.233040 kubelet[2546]: I1101 00:37:44.232992 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-flexvolume-dir\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.233233 kubelet[2546]: I1101 00:37:44.233178 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-kubeconfig\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.233356 kubelet[2546]: I1101 00:37:44.233333 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb6253cf1ea478a4e6ae4e9c7909cd89-kubeconfig\") pod \"kube-scheduler-srv-nthov.gb1.brightbox.com\" (UID: \"eb6253cf1ea478a4e6ae4e9c7909cd89\") " pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.233560 kubelet[2546]: I1101 00:37:44.233420 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ea621cf20bfb26c5b0fcb1e69befd29-k8s-certs\") pod \"kube-apiserver-srv-nthov.gb1.brightbox.com\" (UID: \"7ea621cf20bfb26c5b0fcb1e69befd29\") " pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.233560 kubelet[2546]: I1101 00:37:44.233449 2546 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-ca-certs\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.233949 kubelet[2546]: I1101 00:37:44.233804 2546 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-k8s-certs\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.235158 kubelet[2546]: E1101 00:37:44.235131 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.239985 systemd[1]: Created slice kubepods-burstable-podeb6253cf1ea478a4e6ae4e9c7909cd89.slice - libcontainer container kubepods-burstable-podeb6253cf1ea478a4e6ae4e9c7909cd89.slice. Nov 1 00:37:44.241373 kubelet[2546]: I1101 00:37:44.240347 2546 kubelet_node_status.go:75] "Attempting to register node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.241373 kubelet[2546]: E1101 00:37:44.240822 2546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.206:6443/api/v1/nodes\": dial tcp 10.230.36.206:6443: connect: connection refused" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.242900 kubelet[2546]: E1101 00:37:44.242877 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.243867 kubelet[2546]: E1101 00:37:44.243836 2546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nthov.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.206:6443: connect: connection refused" interval="400ms" Nov 1 00:37:44.444025 kubelet[2546]: I1101 00:37:44.443919 2546 kubelet_node_status.go:75] "Attempting to register node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.444750 kubelet[2546]: E1101 00:37:44.444710 2546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.206:6443/api/v1/nodes\": dial tcp 10.230.36.206:6443: connect: connection refused" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.528896 containerd[1648]: time="2025-11-01T00:37:44.528827007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-nthov.gb1.brightbox.com,Uid:7ea621cf20bfb26c5b0fcb1e69befd29,Namespace:kube-system,Attempt:0,}" Nov 1 00:37:44.536552 containerd[1648]: time="2025-11-01T00:37:44.536501746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-nthov.gb1.brightbox.com,Uid:be5b39ede1078b7954710175f3c12aeb,Namespace:kube-system,Attempt:0,}" Nov 1 00:37:44.545364 containerd[1648]: time="2025-11-01T00:37:44.545103324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-nthov.gb1.brightbox.com,Uid:eb6253cf1ea478a4e6ae4e9c7909cd89,Namespace:kube-system,Attempt:0,}" Nov 1 00:37:44.645602 kubelet[2546]: E1101 00:37:44.645244 2546 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nthov.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.206:6443: connect: connection refused" interval="800ms" Nov 1 00:37:44.692277 containerd[1648]: time="2025-11-01T00:37:44.692202446Z" level=info msg="connecting to shim 4d5256d1ec770e9bdb3c5bb9a03fd1e1b64ce2a68e1464f1d7d2b840eceaecb7" address="unix:///run/containerd/s/6df4514f7cade81e67fc8d70e737cc82810533b823507f91c89cda462985056e" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:37:44.696316 containerd[1648]: time="2025-11-01T00:37:44.695710890Z" level=info msg="connecting to shim e0aa4aae4769309ce198dd4145d6dcb23dae9bae981d97983d8695f53a248181" address="unix:///run/containerd/s/7b75e12ab3a884bfdf93d256f422e9f0a05db6f43c8131edb6b3e1973e9076c6" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:37:44.703756 containerd[1648]: time="2025-11-01T00:37:44.703640074Z" level=info msg="connecting to shim 0749f8912f0deca85072dcca3e5b08399d75ea6de903c7766403dbad5c3cfdd8" address="unix:///run/containerd/s/3e35e6d3d3b80405951d4293ce1af28538f0ed0b72afc7fde99c70e890ce4823" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:37:44.830900 systemd[1]: Started cri-containerd-0749f8912f0deca85072dcca3e5b08399d75ea6de903c7766403dbad5c3cfdd8.scope - libcontainer container 0749f8912f0deca85072dcca3e5b08399d75ea6de903c7766403dbad5c3cfdd8. Nov 1 00:37:44.849785 kubelet[2546]: I1101 00:37:44.849284 2546 kubelet_node_status.go:75] "Attempting to register node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.849785 kubelet[2546]: E1101 00:37:44.849741 2546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.206:6443/api/v1/nodes\": dial tcp 10.230.36.206:6443: connect: connection refused" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:44.853904 systemd[1]: Started cri-containerd-4d5256d1ec770e9bdb3c5bb9a03fd1e1b64ce2a68e1464f1d7d2b840eceaecb7.scope - libcontainer container 4d5256d1ec770e9bdb3c5bb9a03fd1e1b64ce2a68e1464f1d7d2b840eceaecb7. Nov 1 00:37:44.857503 systemd[1]: Started cri-containerd-e0aa4aae4769309ce198dd4145d6dcb23dae9bae981d97983d8695f53a248181.scope - libcontainer container e0aa4aae4769309ce198dd4145d6dcb23dae9bae981d97983d8695f53a248181. 
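Each failed lease request above is retried with a doubled interval: 200 ms, then 400 ms, then 800 ms (and 1.6 s a little further down), while the API server at 10.230.36.206:6443 is still refusing connections. A generic capped-exponential-backoff sketch in that spirit; it is not the kubelet's actual retry code, and the cap and attempt count here are arbitrary.

import time

def retry_with_backoff(op, start=0.2, factor=2.0, cap=7.0, attempts=6):
    """Call op() until it succeeds, doubling the delay after each failure
    (0.2 s -> 0.4 s -> 0.8 s -> 1.6 s ...), up to an arbitrary cap."""
    delay = start
    for attempt in range(1, attempts + 1):
        try:
            return op()
        except OSError as err:            # e.g. connect: connection refused
            print(f"attempt {attempt} failed ({err}); retrying in {delay:.1f} s")
            time.sleep(delay)
            delay = min(delay * factor, cap)
    raise RuntimeError("giving up after repeated failures")

if __name__ == "__main__":
    failures = iter([ConnectionRefusedError("connect: connection refused")] * 3)
    def flaky_op():
        err = next(failures, None)
        if err:
            raise err
        return "lease ensured"
    print(retry_with_backoff(flaky_op))    # succeeds on the fourth attempt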
Nov 1 00:37:44.905998 kubelet[2546]: W1101 00:37:44.905907 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.36.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:44.906531 kubelet[2546]: E1101 00:37:44.906014 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.36.206:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:44.933271 kubelet[2546]: W1101 00:37:44.933183 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.36.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:44.933499 kubelet[2546]: E1101 00:37:44.933284 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.36.206:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:44.981862 containerd[1648]: time="2025-11-01T00:37:44.980179962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-nthov.gb1.brightbox.com,Uid:be5b39ede1078b7954710175f3c12aeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0aa4aae4769309ce198dd4145d6dcb23dae9bae981d97983d8695f53a248181\"" Nov 1 00:37:44.991081 containerd[1648]: time="2025-11-01T00:37:44.991039884Z" level=info msg="CreateContainer within sandbox \"e0aa4aae4769309ce198dd4145d6dcb23dae9bae981d97983d8695f53a248181\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:37:44.997152 containerd[1648]: time="2025-11-01T00:37:44.997111542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-nthov.gb1.brightbox.com,Uid:7ea621cf20bfb26c5b0fcb1e69befd29,Namespace:kube-system,Attempt:0,} returns sandbox id \"0749f8912f0deca85072dcca3e5b08399d75ea6de903c7766403dbad5c3cfdd8\"" Nov 1 00:37:45.005737 containerd[1648]: time="2025-11-01T00:37:45.005689757Z" level=info msg="CreateContainer within sandbox \"0749f8912f0deca85072dcca3e5b08399d75ea6de903c7766403dbad5c3cfdd8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:37:45.009770 kubelet[2546]: W1101 00:37:45.009720 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.36.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:45.009877 kubelet[2546]: E1101 00:37:45.009796 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.36.206:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:45.022025 containerd[1648]: time="2025-11-01T00:37:45.021632413Z" level=info msg="Container a835d3727142511bfa0d45b89ef6a14c35cd6a4b96ac7ceede21b298bb0ee527: CDI devices from CRI 
Config.CDIDevices: []" Nov 1 00:37:45.038451 containerd[1648]: time="2025-11-01T00:37:45.038405230Z" level=info msg="Container 92324437b0f1e599217eef4f739ab7ea71d6eb23f919f9f2b6097a04041e2d55: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:37:45.051692 containerd[1648]: time="2025-11-01T00:37:45.051649804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-nthov.gb1.brightbox.com,Uid:eb6253cf1ea478a4e6ae4e9c7909cd89,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d5256d1ec770e9bdb3c5bb9a03fd1e1b64ce2a68e1464f1d7d2b840eceaecb7\"" Nov 1 00:37:45.052823 containerd[1648]: time="2025-11-01T00:37:45.052767915Z" level=info msg="CreateContainer within sandbox \"e0aa4aae4769309ce198dd4145d6dcb23dae9bae981d97983d8695f53a248181\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a835d3727142511bfa0d45b89ef6a14c35cd6a4b96ac7ceede21b298bb0ee527\"" Nov 1 00:37:45.055752 containerd[1648]: time="2025-11-01T00:37:45.055721056Z" level=info msg="StartContainer for \"a835d3727142511bfa0d45b89ef6a14c35cd6a4b96ac7ceede21b298bb0ee527\"" Nov 1 00:37:45.057127 containerd[1648]: time="2025-11-01T00:37:45.057094921Z" level=info msg="CreateContainer within sandbox \"4d5256d1ec770e9bdb3c5bb9a03fd1e1b64ce2a68e1464f1d7d2b840eceaecb7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:37:45.061422 containerd[1648]: time="2025-11-01T00:37:45.061293284Z" level=info msg="CreateContainer within sandbox \"0749f8912f0deca85072dcca3e5b08399d75ea6de903c7766403dbad5c3cfdd8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92324437b0f1e599217eef4f739ab7ea71d6eb23f919f9f2b6097a04041e2d55\"" Nov 1 00:37:45.061801 containerd[1648]: time="2025-11-01T00:37:45.061769331Z" level=info msg="connecting to shim a835d3727142511bfa0d45b89ef6a14c35cd6a4b96ac7ceede21b298bb0ee527" address="unix:///run/containerd/s/7b75e12ab3a884bfdf93d256f422e9f0a05db6f43c8131edb6b3e1973e9076c6" protocol=ttrpc version=3 Nov 1 00:37:45.063547 containerd[1648]: time="2025-11-01T00:37:45.062456728Z" level=info msg="StartContainer for \"92324437b0f1e599217eef4f739ab7ea71d6eb23f919f9f2b6097a04041e2d55\"" Nov 1 00:37:45.065177 containerd[1648]: time="2025-11-01T00:37:45.065143810Z" level=info msg="connecting to shim 92324437b0f1e599217eef4f739ab7ea71d6eb23f919f9f2b6097a04041e2d55" address="unix:///run/containerd/s/3e35e6d3d3b80405951d4293ce1af28538f0ed0b72afc7fde99c70e890ce4823" protocol=ttrpc version=3 Nov 1 00:37:45.072714 containerd[1648]: time="2025-11-01T00:37:45.072673019Z" level=info msg="Container 0712536743eb89043312ab35e0e921ce9862d8d4bd6a4b7dcdd40673e742761c: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:37:45.090272 containerd[1648]: time="2025-11-01T00:37:45.090195422Z" level=info msg="CreateContainer within sandbox \"4d5256d1ec770e9bdb3c5bb9a03fd1e1b64ce2a68e1464f1d7d2b840eceaecb7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0712536743eb89043312ab35e0e921ce9862d8d4bd6a4b7dcdd40673e742761c\"" Nov 1 00:37:45.091177 containerd[1648]: time="2025-11-01T00:37:45.091145695Z" level=info msg="StartContainer for \"0712536743eb89043312ab35e0e921ce9862d8d4bd6a4b7dcdd40673e742761c\"" Nov 1 00:37:45.095310 containerd[1648]: time="2025-11-01T00:37:45.095276200Z" level=info msg="connecting to shim 0712536743eb89043312ab35e0e921ce9862d8d4bd6a4b7dcdd40673e742761c" address="unix:///run/containerd/s/6df4514f7cade81e67fc8d70e737cc82810533b823507f91c89cda462985056e" protocol=ttrpc version=3 Nov 1 
00:37:45.104421 systemd[1]: Started cri-containerd-92324437b0f1e599217eef4f739ab7ea71d6eb23f919f9f2b6097a04041e2d55.scope - libcontainer container 92324437b0f1e599217eef4f739ab7ea71d6eb23f919f9f2b6097a04041e2d55. Nov 1 00:37:45.124119 systemd[1]: Started cri-containerd-a835d3727142511bfa0d45b89ef6a14c35cd6a4b96ac7ceede21b298bb0ee527.scope - libcontainer container a835d3727142511bfa0d45b89ef6a14c35cd6a4b96ac7ceede21b298bb0ee527. Nov 1 00:37:45.149758 systemd[1]: Started cri-containerd-0712536743eb89043312ab35e0e921ce9862d8d4bd6a4b7dcdd40673e742761c.scope - libcontainer container 0712536743eb89043312ab35e0e921ce9862d8d4bd6a4b7dcdd40673e742761c. Nov 1 00:37:45.277898 containerd[1648]: time="2025-11-01T00:37:45.277627490Z" level=info msg="StartContainer for \"0712536743eb89043312ab35e0e921ce9862d8d4bd6a4b7dcdd40673e742761c\" returns successfully" Nov 1 00:37:45.277898 containerd[1648]: time="2025-11-01T00:37:45.277872866Z" level=info msg="StartContainer for \"92324437b0f1e599217eef4f739ab7ea71d6eb23f919f9f2b6097a04041e2d55\" returns successfully" Nov 1 00:37:45.296471 containerd[1648]: time="2025-11-01T00:37:45.296327090Z" level=info msg="StartContainer for \"a835d3727142511bfa0d45b89ef6a14c35cd6a4b96ac7ceede21b298bb0ee527\" returns successfully" Nov 1 00:37:45.317246 kubelet[2546]: W1101 00:37:45.317169 2546 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.36.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nthov.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.36.206:6443: connect: connection refused Nov 1 00:37:45.318241 kubelet[2546]: E1101 00:37:45.317253 2546 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.36.206:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-nthov.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.36.206:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:37:45.447204 kubelet[2546]: E1101 00:37:45.447133 2546 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.36.206:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-nthov.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.36.206:6443: connect: connection refused" interval="1.6s" Nov 1 00:37:45.654093 kubelet[2546]: I1101 00:37:45.654052 2546 kubelet_node_status.go:75] "Attempting to register node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:45.654499 kubelet[2546]: E1101 00:37:45.654449 2546 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.36.206:6443/api/v1/nodes\": dial tcp 10.230.36.206:6443: connect: connection refused" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:46.126923 kubelet[2546]: E1101 00:37:46.126534 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:46.128810 kubelet[2546]: E1101 00:37:46.128783 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:46.134029 kubelet[2546]: E1101 00:37:46.134007 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 
1 00:37:47.136662 kubelet[2546]: E1101 00:37:47.136582 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:47.139043 kubelet[2546]: E1101 00:37:47.138692 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:47.139382 kubelet[2546]: E1101 00:37:47.139358 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:47.258363 kubelet[2546]: I1101 00:37:47.257962 2546 kubelet_node_status.go:75] "Attempting to register node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.139967 kubelet[2546]: E1101 00:37:48.139911 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.140749 kubelet[2546]: E1101 00:37:48.140298 2546 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.244885 kubelet[2546]: E1101 00:37:48.244811 2546 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-nthov.gb1.brightbox.com\" not found" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.411799 kubelet[2546]: I1101 00:37:48.411083 2546 kubelet_node_status.go:78] "Successfully registered node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.442347 kubelet[2546]: I1101 00:37:48.442294 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.452534 kubelet[2546]: E1101 00:37:48.452469 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-nthov.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.452534 kubelet[2546]: I1101 00:37:48.452531 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.454493 kubelet[2546]: E1101 00:37:48.454444 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.454606 kubelet[2546]: I1101 00:37:48.454499 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" Nov 1 00:37:48.456303 kubelet[2546]: E1101 00:37:48.456266 2546 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-nthov.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" Nov 1 00:37:49.019747 kubelet[2546]: I1101 00:37:49.019660 2546 apiserver.go:52] "Watching apiserver" Nov 1 00:37:49.030910 kubelet[2546]: I1101 00:37:49.030811 2546 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 
00:37:49.140910 kubelet[2546]: I1101 00:37:49.140670 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:49.150520 kubelet[2546]: W1101 00:37:49.150394 2546 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:37:50.122999 kubelet[2546]: I1101 00:37:50.122950 2546 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:50.131504 kubelet[2546]: W1101 00:37:50.131308 2546 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:37:50.659085 systemd[1]: Reload requested from client PID 2821 ('systemctl') (unit session-9.scope)... Nov 1 00:37:50.659148 systemd[1]: Reloading... Nov 1 00:37:50.865511 zram_generator::config[2873]: No configuration found. Nov 1 00:37:51.243713 systemd[1]: Reloading finished in 583 ms. Nov 1 00:37:51.285402 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:51.297221 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:37:51.297691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:51.297789 systemd[1]: kubelet.service: Consumed 1.303s CPU time, 127.7M memory peak. Nov 1 00:37:51.302368 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:37:51.590918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:37:51.601928 (kubelet)[2931]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:37:51.699161 kubelet[2931]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:37:51.699161 kubelet[2931]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:37:51.700521 kubelet[2931]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:37:51.700521 kubelet[2931]: I1101 00:37:51.699844 2931 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:37:51.714037 kubelet[2931]: I1101 00:37:51.713992 2931 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:37:51.714037 kubelet[2931]: I1101 00:37:51.714027 2931 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:37:51.714355 kubelet[2931]: I1101 00:37:51.714331 2931 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:37:51.719591 kubelet[2931]: I1101 00:37:51.719563 2931 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
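The reloaded kubelet above has client rotation on and loads its credential from /var/lib/kubelet/pki/kubelet-client-current.pem. One way to see how long that rotated client certificate remains valid, assuming the openssl CLI is available and the file is readable (typically root only); the subject shown in the comment is the usual form for kubelet client certificates, not a value read from this host.

import subprocess

KUBELET_CLIENT_PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

# Print the certificate's subject and expiry; openssl reads the first cert in the PEM.
result = subprocess.run(
    ["openssl", "x509", "-noout", "-subject", "-enddate", "-in", KUBELET_CLIENT_PEM],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
# e.g.  subject=O=system:nodes, CN=system:node:<node name>
#       notAfter=<expiry timestamp>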
Nov 1 00:37:51.733858 kubelet[2931]: I1101 00:37:51.733678 2931 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:37:51.745142 kubelet[2931]: I1101 00:37:51.745114 2931 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 00:37:51.755503 kubelet[2931]: I1101 00:37:51.755032 2931 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:37:51.755503 kubelet[2931]: I1101 00:37:51.755386 2931 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:37:51.755903 kubelet[2931]: I1101 00:37:51.755428 2931 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-nthov.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:37:51.756123 kubelet[2931]: I1101 00:37:51.756102 2931 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:37:51.756228 kubelet[2931]: I1101 00:37:51.756212 2931 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:37:51.756423 kubelet[2931]: I1101 00:37:51.756404 2931 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:37:51.756802 kubelet[2931]: I1101 00:37:51.756756 2931 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:37:51.758000 kubelet[2931]: I1101 00:37:51.757765 2931 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:37:51.758000 kubelet[2931]: I1101 00:37:51.757817 2931 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:37:51.758000 kubelet[2931]: I1101 00:37:51.757835 2931 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:37:51.761507 kubelet[2931]: I1101 00:37:51.760524 2931 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 1 00:37:51.761507 kubelet[2931]: I1101 00:37:51.761050 2931 kubelet.go:890] "Not starting ClusterTrustBundle informer 
because we are in static kubelet mode" Nov 1 00:37:51.762631 kubelet[2931]: I1101 00:37:51.762610 2931 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:37:51.763680 kubelet[2931]: I1101 00:37:51.763658 2931 server.go:1287] "Started kubelet" Nov 1 00:37:51.787553 kubelet[2931]: I1101 00:37:51.787523 2931 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:37:51.793725 kubelet[2931]: I1101 00:37:51.793684 2931 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:37:51.798696 kubelet[2931]: I1101 00:37:51.798508 2931 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:37:51.825832 kubelet[2931]: I1101 00:37:51.825797 2931 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:37:51.826171 kubelet[2931]: I1101 00:37:51.801083 2931 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:37:51.835801 kubelet[2931]: I1101 00:37:51.835775 2931 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:37:51.836512 kubelet[2931]: I1101 00:37:51.836108 2931 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:37:51.836947 kubelet[2931]: I1101 00:37:51.800881 2931 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:37:51.837356 kubelet[2931]: I1101 00:37:51.801240 2931 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:37:51.838053 kubelet[2931]: I1101 00:37:51.837726 2931 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:37:51.849581 kubelet[2931]: E1101 00:37:51.803752 2931 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-nthov.gb1.brightbox.com\" not found" Nov 1 00:37:51.855753 kubelet[2931]: I1101 00:37:51.811969 2931 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:37:51.867147 kubelet[2931]: I1101 00:37:51.867037 2931 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:37:51.892118 kubelet[2931]: E1101 00:37:51.892041 2931 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:37:51.907747 kubelet[2931]: I1101 00:37:51.907691 2931 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:37:51.918015 kubelet[2931]: I1101 00:37:51.917652 2931 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:37:51.918015 kubelet[2931]: I1101 00:37:51.917698 2931 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:37:51.918015 kubelet[2931]: I1101 00:37:51.917722 2931 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
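Kubelet 2931 starts with deprecation warnings noting that --container-runtime-endpoint and --volume-plugin-dir belong in the kubelet config file, and the NodeConfig dump above records the effective settings (systemd cgroup driver, default hard-eviction thresholds, static pod path /etc/kubernetes/manifests). A hedged KubeletConfiguration sketch of how those settings could be expressed in file form; the containerd socket path is an assumption, and the volume plugin directory is taken from the FlexVolume probe path that appears later in this log:

# Hedged sketch of a KubeletConfiguration mirroring the settings logged above.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                      # "Using cgroup driver setting received from the CRI runtime"
staticPodPath: /etc/kubernetes/manifests   # "Adding static pod path"
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock        # assumed socket path
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # path seen in later FlexVolume probes
evictionHard:                              # matches HardEvictionThresholds in the NodeConfig dump
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"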
Nov 1 00:37:51.918015 kubelet[2931]: I1101 00:37:51.917774 2931 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:37:51.918015 kubelet[2931]: E1101 00:37:51.917857 2931 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:37:51.994317 kubelet[2931]: I1101 00:37:51.994171 2931 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:37:51.994568 kubelet[2931]: I1101 00:37:51.994543 2931 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:37:51.995438 kubelet[2931]: I1101 00:37:51.995418 2931 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:37:51.997153 kubelet[2931]: I1101 00:37:51.995837 2931 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:37:51.997338 kubelet[2931]: I1101 00:37:51.997262 2931 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:37:51.997446 kubelet[2931]: I1101 00:37:51.997428 2931 policy_none.go:49] "None policy: Start" Nov 1 00:37:51.997569 kubelet[2931]: I1101 00:37:51.997543 2931 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:37:51.997635 kubelet[2931]: I1101 00:37:51.997576 2931 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:37:51.998075 kubelet[2931]: I1101 00:37:51.997761 2931 state_mem.go:75] "Updated machine memory state" Nov 1 00:37:52.007338 kubelet[2931]: I1101 00:37:52.006532 2931 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:37:52.008861 kubelet[2931]: I1101 00:37:52.008648 2931 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:37:52.009504 kubelet[2931]: I1101 00:37:52.009312 2931 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:37:52.011214 kubelet[2931]: I1101 00:37:52.011172 2931 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:37:52.018813 kubelet[2931]: I1101 00:37:52.018456 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.025634 kubelet[2931]: I1101 00:37:52.025583 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.028460 kubelet[2931]: E1101 00:37:52.022651 2931 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 1 00:37:52.030149 kubelet[2931]: I1101 00:37:52.029648 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.039004 kubelet[2931]: W1101 00:37:52.038967 2931 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:37:52.042786 kubelet[2931]: I1101 00:37:52.042727 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ea621cf20bfb26c5b0fcb1e69befd29-ca-certs\") pod \"kube-apiserver-srv-nthov.gb1.brightbox.com\" (UID: \"7ea621cf20bfb26c5b0fcb1e69befd29\") " pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.043565 kubelet[2931]: I1101 00:37:52.043127 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ea621cf20bfb26c5b0fcb1e69befd29-k8s-certs\") pod \"kube-apiserver-srv-nthov.gb1.brightbox.com\" (UID: \"7ea621cf20bfb26c5b0fcb1e69befd29\") " pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.043565 kubelet[2931]: I1101 00:37:52.043225 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-k8s-certs\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.045256 kubelet[2931]: I1101 00:37:52.045020 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-kubeconfig\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.045256 kubelet[2931]: I1101 00:37:52.045192 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.045256 kubelet[2931]: I1101 00:37:52.045252 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb6253cf1ea478a4e6ae4e9c7909cd89-kubeconfig\") pod \"kube-scheduler-srv-nthov.gb1.brightbox.com\" (UID: \"eb6253cf1ea478a4e6ae4e9c7909cd89\") " pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.045412 kubelet[2931]: I1101 00:37:52.045287 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ea621cf20bfb26c5b0fcb1e69befd29-usr-share-ca-certificates\") pod \"kube-apiserver-srv-nthov.gb1.brightbox.com\" (UID: \"7ea621cf20bfb26c5b0fcb1e69befd29\") " pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.045412 kubelet[2931]: I1101 00:37:52.045315 
2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-ca-certs\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.045412 kubelet[2931]: I1101 00:37:52.045340 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be5b39ede1078b7954710175f3c12aeb-flexvolume-dir\") pod \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" (UID: \"be5b39ede1078b7954710175f3c12aeb\") " pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.055506 kubelet[2931]: W1101 00:37:52.052722 2931 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:37:52.055506 kubelet[2931]: E1101 00:37:52.052993 2931 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-nthov.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.058191 kubelet[2931]: W1101 00:37:52.058153 2931 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:37:52.058317 kubelet[2931]: E1101 00:37:52.058253 2931 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-nthov.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.153719 kubelet[2931]: I1101 00:37:52.152920 2931 kubelet_node_status.go:75] "Attempting to register node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.165705 kubelet[2931]: I1101 00:37:52.165594 2931 kubelet_node_status.go:124] "Node was previously registered" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.165824 kubelet[2931]: I1101 00:37:52.165769 2931 kubelet_node_status.go:78] "Successfully registered node" node="srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.760802 kubelet[2931]: I1101 00:37:52.760680 2931 apiserver.go:52] "Watching apiserver" Nov 1 00:37:52.838087 kubelet[2931]: I1101 00:37:52.838014 2931 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:37:52.951343 kubelet[2931]: I1101 00:37:52.950415 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.952870 kubelet[2931]: I1101 00:37:52.950787 2931 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.966054 kubelet[2931]: W1101 00:37:52.965890 2931 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 1 00:37:52.966809 kubelet[2931]: E1101 00:37:52.966753 2931 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-nthov.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.967418 kubelet[2931]: W1101 00:37:52.967214 2931 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not 
contain dots] Nov 1 00:37:52.967418 kubelet[2931]: E1101 00:37:52.967253 2931 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-nthov.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" Nov 1 00:37:52.998330 kubelet[2931]: I1101 00:37:52.998255 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-nthov.gb1.brightbox.com" podStartSLOduration=2.9982247380000002 podStartE2EDuration="2.998224738s" podCreationTimestamp="2025-11-01 00:37:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:37:52.989289644 +0000 UTC m=+1.373924053" watchObservedRunningTime="2025-11-01 00:37:52.998224738 +0000 UTC m=+1.382859147" Nov 1 00:37:53.009146 kubelet[2931]: I1101 00:37:53.009079 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-nthov.gb1.brightbox.com" podStartSLOduration=1.009066062 podStartE2EDuration="1.009066062s" podCreationTimestamp="2025-11-01 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:37:52.998588965 +0000 UTC m=+1.383223372" watchObservedRunningTime="2025-11-01 00:37:53.009066062 +0000 UTC m=+1.393700474" Nov 1 00:37:53.019439 kubelet[2931]: I1101 00:37:53.019310 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-nthov.gb1.brightbox.com" podStartSLOduration=4.019298957 podStartE2EDuration="4.019298957s" podCreationTimestamp="2025-11-01 00:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:37:53.009258759 +0000 UTC m=+1.393893154" watchObservedRunningTime="2025-11-01 00:37:53.019298957 +0000 UTC m=+1.403933351" Nov 1 00:37:55.244807 kubelet[2931]: I1101 00:37:55.244704 2931 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:37:55.246149 kubelet[2931]: I1101 00:37:55.245556 2931 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:37:55.246211 containerd[1648]: time="2025-11-01T00:37:55.245258875Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:37:55.900917 systemd[1]: Created slice kubepods-besteffort-pod46ad43f6_1cad_42ca_a698_1cb5cd2771a4.slice - libcontainer container kubepods-besteffort-pod46ad43f6_1cad_42ca_a698_1cb5cd2771a4.slice. 
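The kuberuntime_manager and kubelet_network entries above reflect the node object having been assigned its pod CIDR; containerd then waits for a CNI plugin (Calico, installed below) to drop its configuration. A hedged sketch of the Node fields behind those messages, using only the node name and CIDR printed in this log:

# Hedged sketch of the Node spec fields implied by the "Updating Pod CIDR" entry.
apiVersion: v1
kind: Node
metadata:
  name: srv-nthov.gb1.brightbox.com
spec:
  podCIDR: 192.168.0.0/24
  podCIDRs:
    - 192.168.0.0/24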
Nov 1 00:37:55.968950 kubelet[2931]: I1101 00:37:55.968847 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46ad43f6-1cad-42ca-a698-1cb5cd2771a4-xtables-lock\") pod \"kube-proxy-czjhk\" (UID: \"46ad43f6-1cad-42ca-a698-1cb5cd2771a4\") " pod="kube-system/kube-proxy-czjhk" Nov 1 00:37:55.969370 kubelet[2931]: I1101 00:37:55.969223 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46ad43f6-1cad-42ca-a698-1cb5cd2771a4-lib-modules\") pod \"kube-proxy-czjhk\" (UID: \"46ad43f6-1cad-42ca-a698-1cb5cd2771a4\") " pod="kube-system/kube-proxy-czjhk" Nov 1 00:37:55.969370 kubelet[2931]: I1101 00:37:55.969316 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46ad43f6-1cad-42ca-a698-1cb5cd2771a4-kube-proxy\") pod \"kube-proxy-czjhk\" (UID: \"46ad43f6-1cad-42ca-a698-1cb5cd2771a4\") " pod="kube-system/kube-proxy-czjhk" Nov 1 00:37:55.969628 kubelet[2931]: I1101 00:37:55.969349 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tls\" (UniqueName: \"kubernetes.io/projected/46ad43f6-1cad-42ca-a698-1cb5cd2771a4-kube-api-access-p5tls\") pod \"kube-proxy-czjhk\" (UID: \"46ad43f6-1cad-42ca-a698-1cb5cd2771a4\") " pod="kube-system/kube-proxy-czjhk" Nov 1 00:37:56.083595 kubelet[2931]: E1101 00:37:56.083048 2931 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 1 00:37:56.083595 kubelet[2931]: E1101 00:37:56.083102 2931 projected.go:194] Error preparing data for projected volume kube-api-access-p5tls for pod kube-system/kube-proxy-czjhk: configmap "kube-root-ca.crt" not found Nov 1 00:37:56.083595 kubelet[2931]: E1101 00:37:56.083208 2931 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46ad43f6-1cad-42ca-a698-1cb5cd2771a4-kube-api-access-p5tls podName:46ad43f6-1cad-42ca-a698-1cb5cd2771a4 nodeName:}" failed. No retries permitted until 2025-11-01 00:37:56.583155952 +0000 UTC m=+4.967790346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p5tls" (UniqueName: "kubernetes.io/projected/46ad43f6-1cad-42ca-a698-1cb5cd2771a4-kube-api-access-p5tls") pod "kube-proxy-czjhk" (UID: "46ad43f6-1cad-42ca-a698-1cb5cd2771a4") : configmap "kube-root-ca.crt" not found Nov 1 00:37:56.386175 systemd[1]: Created slice kubepods-besteffort-pod99ae42f9_438a_4aef_ba7b_6be3770b4a89.slice - libcontainer container kubepods-besteffort-pod99ae42f9_438a_4aef_ba7b_6be3770b4a89.slice. 
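The reconciler entries above list the volumes of the kubeadm-style kube-proxy pod, and the MountVolume.SetUp failure is transient: the projected service-account volume includes the kube-root-ca.crt ConfigMap, which has not been published into the namespace yet, so the mount is retried 500 ms later. A hedged sketch of the volume section those entries imply; the surrounding DaemonSet spec and the hostPath locations are assumptions rather than values printed in this log:

# Hedged sketch of the kube-proxy pod volumes named by the reconciler above.
volumes:
  - name: kube-proxy
    configMap:
      name: kube-proxy            # proxy configuration ConfigMap
  - name: xtables-lock
    hostPath:
      path: /run/xtables.lock     # serialises iptables access with the host (assumed path)
      type: FileOrCreate
  - name: lib-modules
    hostPath:
      path: /lib/modules          # kernel modules for iptables/ipvs (assumed path)
  - name: kube-api-access-p5tls   # projected token; also mounts kube-root-ca.crt,
    projected:                    # which is why setup fails until that ConfigMap exists
      sources:
        - serviceAccountToken:
            path: token
        - configMap:
            name: kube-root-ca.crt
            items:
              - key: ca.crt
                path: ca.crt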
Nov 1 00:37:56.472790 kubelet[2931]: I1101 00:37:56.472669 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc2z5\" (UniqueName: \"kubernetes.io/projected/99ae42f9-438a-4aef-ba7b-6be3770b4a89-kube-api-access-pc2z5\") pod \"tigera-operator-7dcd859c48-dsq5g\" (UID: \"99ae42f9-438a-4aef-ba7b-6be3770b4a89\") " pod="tigera-operator/tigera-operator-7dcd859c48-dsq5g" Nov 1 00:37:56.472790 kubelet[2931]: I1101 00:37:56.472769 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99ae42f9-438a-4aef-ba7b-6be3770b4a89-var-lib-calico\") pod \"tigera-operator-7dcd859c48-dsq5g\" (UID: \"99ae42f9-438a-4aef-ba7b-6be3770b4a89\") " pod="tigera-operator/tigera-operator-7dcd859c48-dsq5g" Nov 1 00:37:56.692220 containerd[1648]: time="2025-11-01T00:37:56.692017625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dsq5g,Uid:99ae42f9-438a-4aef-ba7b-6be3770b4a89,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:37:56.723158 containerd[1648]: time="2025-11-01T00:37:56.723100806Z" level=info msg="connecting to shim 92ef274e7976efff3096779f34b3e11d5c5bb861b258d0fc44eed4813ce197a2" address="unix:///run/containerd/s/88b1f3b61a368cf7c9831609778766975bfcb5fde8ec007f288b5ce3593242f8" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:37:56.761716 systemd[1]: Started cri-containerd-92ef274e7976efff3096779f34b3e11d5c5bb861b258d0fc44eed4813ce197a2.scope - libcontainer container 92ef274e7976efff3096779f34b3e11d5c5bb861b258d0fc44eed4813ce197a2. Nov 1 00:37:56.813892 containerd[1648]: time="2025-11-01T00:37:56.813663639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czjhk,Uid:46ad43f6-1cad-42ca-a698-1cb5cd2771a4,Namespace:kube-system,Attempt:0,}" Nov 1 00:37:56.847401 containerd[1648]: time="2025-11-01T00:37:56.847343272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-dsq5g,Uid:99ae42f9-438a-4aef-ba7b-6be3770b4a89,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"92ef274e7976efff3096779f34b3e11d5c5bb861b258d0fc44eed4813ce197a2\"" Nov 1 00:37:56.852724 containerd[1648]: time="2025-11-01T00:37:56.852680317Z" level=info msg="connecting to shim 4dfa7904a34e9fdf1b0c79ae0dd8daeb90fcdf5414de4a123fc8f260e4211441" address="unix:///run/containerd/s/17b5de1e4e50d02c9f8d610b22f83950d8af424878e5e1c5e05c7d88a695890d" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:37:56.856400 containerd[1648]: time="2025-11-01T00:37:56.856299619Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:37:56.892696 systemd[1]: Started cri-containerd-4dfa7904a34e9fdf1b0c79ae0dd8daeb90fcdf5414de4a123fc8f260e4211441.scope - libcontainer container 4dfa7904a34e9fdf1b0c79ae0dd8daeb90fcdf5414de4a123fc8f260e4211441. 
Nov 1 00:37:56.937057 containerd[1648]: time="2025-11-01T00:37:56.936995774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czjhk,Uid:46ad43f6-1cad-42ca-a698-1cb5cd2771a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dfa7904a34e9fdf1b0c79ae0dd8daeb90fcdf5414de4a123fc8f260e4211441\"" Nov 1 00:37:56.943042 containerd[1648]: time="2025-11-01T00:37:56.942931524Z" level=info msg="CreateContainer within sandbox \"4dfa7904a34e9fdf1b0c79ae0dd8daeb90fcdf5414de4a123fc8f260e4211441\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:37:56.954991 containerd[1648]: time="2025-11-01T00:37:56.954918760Z" level=info msg="Container 555436846a426a959e1b29d4cbba98c1875a032df1023936651d23836bb10794: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:37:56.965187 containerd[1648]: time="2025-11-01T00:37:56.965146672Z" level=info msg="CreateContainer within sandbox \"4dfa7904a34e9fdf1b0c79ae0dd8daeb90fcdf5414de4a123fc8f260e4211441\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"555436846a426a959e1b29d4cbba98c1875a032df1023936651d23836bb10794\"" Nov 1 00:37:56.966326 containerd[1648]: time="2025-11-01T00:37:56.965909166Z" level=info msg="StartContainer for \"555436846a426a959e1b29d4cbba98c1875a032df1023936651d23836bb10794\"" Nov 1 00:37:56.969184 containerd[1648]: time="2025-11-01T00:37:56.969149977Z" level=info msg="connecting to shim 555436846a426a959e1b29d4cbba98c1875a032df1023936651d23836bb10794" address="unix:///run/containerd/s/17b5de1e4e50d02c9f8d610b22f83950d8af424878e5e1c5e05c7d88a695890d" protocol=ttrpc version=3 Nov 1 00:37:57.000776 systemd[1]: Started cri-containerd-555436846a426a959e1b29d4cbba98c1875a032df1023936651d23836bb10794.scope - libcontainer container 555436846a426a959e1b29d4cbba98c1875a032df1023936651d23836bb10794. Nov 1 00:37:57.076830 containerd[1648]: time="2025-11-01T00:37:57.076598703Z" level=info msg="StartContainer for \"555436846a426a959e1b29d4cbba98c1875a032df1023936651d23836bb10794\" returns successfully" Nov 1 00:37:58.633811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331864056.mount: Deactivated successfully. 
Nov 1 00:37:58.940662 kubelet[2931]: I1101 00:37:58.940382 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-czjhk" podStartSLOduration=3.940359868 podStartE2EDuration="3.940359868s" podCreationTimestamp="2025-11-01 00:37:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:37:57.984708954 +0000 UTC m=+6.369343369" watchObservedRunningTime="2025-11-01 00:37:58.940359868 +0000 UTC m=+7.324994277" Nov 1 00:37:59.673509 containerd[1648]: time="2025-11-01T00:37:59.672772728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:59.674745 containerd[1648]: time="2025-11-01T00:37:59.674702528Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:37:59.675572 containerd[1648]: time="2025-11-01T00:37:59.675540440Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:59.679307 containerd[1648]: time="2025-11-01T00:37:59.679275646Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:37:59.681139 containerd[1648]: time="2025-11-01T00:37:59.681105844Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.824764119s" Nov 1 00:37:59.681302 containerd[1648]: time="2025-11-01T00:37:59.681275830Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:37:59.685498 containerd[1648]: time="2025-11-01T00:37:59.685447251Z" level=info msg="CreateContainer within sandbox \"92ef274e7976efff3096779f34b3e11d5c5bb861b258d0fc44eed4813ce197a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:37:59.700538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579914070.mount: Deactivated successfully. 
Nov 1 00:37:59.704646 containerd[1648]: time="2025-11-01T00:37:59.704610952Z" level=info msg="Container de3b835260330fda77c5c1d2f2ef31a56399b68bf53374afe13c4b73749cb2b5: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:37:59.713622 containerd[1648]: time="2025-11-01T00:37:59.713588796Z" level=info msg="CreateContainer within sandbox \"92ef274e7976efff3096779f34b3e11d5c5bb861b258d0fc44eed4813ce197a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"de3b835260330fda77c5c1d2f2ef31a56399b68bf53374afe13c4b73749cb2b5\"" Nov 1 00:37:59.714826 containerd[1648]: time="2025-11-01T00:37:59.714718518Z" level=info msg="StartContainer for \"de3b835260330fda77c5c1d2f2ef31a56399b68bf53374afe13c4b73749cb2b5\"" Nov 1 00:37:59.716289 containerd[1648]: time="2025-11-01T00:37:59.716211925Z" level=info msg="connecting to shim de3b835260330fda77c5c1d2f2ef31a56399b68bf53374afe13c4b73749cb2b5" address="unix:///run/containerd/s/88b1f3b61a368cf7c9831609778766975bfcb5fde8ec007f288b5ce3593242f8" protocol=ttrpc version=3 Nov 1 00:37:59.751730 systemd[1]: Started cri-containerd-de3b835260330fda77c5c1d2f2ef31a56399b68bf53374afe13c4b73749cb2b5.scope - libcontainer container de3b835260330fda77c5c1d2f2ef31a56399b68bf53374afe13c4b73749cb2b5. Nov 1 00:37:59.804708 containerd[1648]: time="2025-11-01T00:37:59.804584845Z" level=info msg="StartContainer for \"de3b835260330fda77c5c1d2f2ef31a56399b68bf53374afe13c4b73749cb2b5\" returns successfully" Nov 1 00:38:00.039883 kubelet[2931]: I1101 00:38:00.039564 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-dsq5g" podStartSLOduration=1.20930346 podStartE2EDuration="4.039542777s" podCreationTimestamp="2025-11-01 00:37:56 +0000 UTC" firstStartedPulling="2025-11-01 00:37:56.852100762 +0000 UTC m=+5.236735162" lastFinishedPulling="2025-11-01 00:37:59.682340084 +0000 UTC m=+8.066974479" observedRunningTime="2025-11-01 00:37:59.988392354 +0000 UTC m=+8.373026773" watchObservedRunningTime="2025-11-01 00:38:00.039542777 +0000 UTC m=+8.424177184" Nov 1 00:38:07.343847 sudo[1929]: pam_unix(sudo:session): session closed for user root Nov 1 00:38:07.493470 sshd[1928]: Connection closed by 139.178.89.65 port 47594 Nov 1 00:38:07.495595 sshd-session[1925]: pam_unix(sshd:session): session closed for user core Nov 1 00:38:07.505373 systemd[1]: sshd@6-10.230.36.206:22-139.178.89.65:47594.service: Deactivated successfully. Nov 1 00:38:07.512351 systemd[1]: session-9.scope: Deactivated successfully. Nov 1 00:38:07.515672 systemd[1]: session-9.scope: Consumed 5.733s CPU time, 155.3M memory peak. Nov 1 00:38:07.520864 systemd-logind[1630]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:38:07.526800 systemd-logind[1630]: Removed session 9. Nov 1 00:38:13.857463 systemd[1]: Created slice kubepods-besteffort-podb6978bf8_8695_4c9f_b328_68dac6c050a4.slice - libcontainer container kubepods-besteffort-podb6978bf8_8695_4c9f_b328_68dac6c050a4.slice. 
Nov 1 00:38:13.901241 kubelet[2931]: I1101 00:38:13.900896 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpwtf\" (UniqueName: \"kubernetes.io/projected/b6978bf8-8695-4c9f-b328-68dac6c050a4-kube-api-access-wpwtf\") pod \"calico-typha-7c77675777-jp2j8\" (UID: \"b6978bf8-8695-4c9f-b328-68dac6c050a4\") " pod="calico-system/calico-typha-7c77675777-jp2j8" Nov 1 00:38:13.901241 kubelet[2931]: I1101 00:38:13.900968 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6978bf8-8695-4c9f-b328-68dac6c050a4-tigera-ca-bundle\") pod \"calico-typha-7c77675777-jp2j8\" (UID: \"b6978bf8-8695-4c9f-b328-68dac6c050a4\") " pod="calico-system/calico-typha-7c77675777-jp2j8" Nov 1 00:38:13.901241 kubelet[2931]: I1101 00:38:13.900996 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6978bf8-8695-4c9f-b328-68dac6c050a4-typha-certs\") pod \"calico-typha-7c77675777-jp2j8\" (UID: \"b6978bf8-8695-4c9f-b328-68dac6c050a4\") " pod="calico-system/calico-typha-7c77675777-jp2j8" Nov 1 00:38:14.085616 systemd[1]: Created slice kubepods-besteffort-podb320af88_bb45_4fb8_88cb_99b2071aaa74.slice - libcontainer container kubepods-besteffort-podb320af88_bb45_4fb8_88cb_99b2071aaa74.slice. Nov 1 00:38:14.102471 kubelet[2931]: I1101 00:38:14.102425 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-lib-modules\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.102879 kubelet[2931]: I1101 00:38:14.102816 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2tnn\" (UniqueName: \"kubernetes.io/projected/b320af88-bb45-4fb8-88cb-99b2071aaa74-kube-api-access-q2tnn\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.103251 kubelet[2931]: I1101 00:38:14.102908 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b320af88-bb45-4fb8-88cb-99b2071aaa74-tigera-ca-bundle\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.103251 kubelet[2931]: I1101 00:38:14.102941 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-cni-net-dir\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.103251 kubelet[2931]: I1101 00:38:14.102971 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-policysync\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.103251 kubelet[2931]: I1101 00:38:14.103008 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-var-lib-calico\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.103251 kubelet[2931]: I1101 00:38:14.103034 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-cni-bin-dir\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.104537 kubelet[2931]: I1101 00:38:14.103068 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-flexvol-driver-host\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.104537 kubelet[2931]: I1101 00:38:14.103097 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-xtables-lock\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.104537 kubelet[2931]: I1101 00:38:14.103123 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b320af88-bb45-4fb8-88cb-99b2071aaa74-node-certs\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.104537 kubelet[2931]: I1101 00:38:14.103151 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-cni-log-dir\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.104537 kubelet[2931]: I1101 00:38:14.103175 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b320af88-bb45-4fb8-88cb-99b2071aaa74-var-run-calico\") pod \"calico-node-j4znc\" (UID: \"b320af88-bb45-4fb8-88cb-99b2071aaa74\") " pod="calico-system/calico-node-j4znc" Nov 1 00:38:14.166680 containerd[1648]: time="2025-11-01T00:38:14.165986224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c77675777-jp2j8,Uid:b6978bf8-8695-4c9f-b328-68dac6c050a4,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:14.242307 containerd[1648]: time="2025-11-01T00:38:14.242184937Z" level=info msg="connecting to shim 62ca3cbb1652bcf9ef007279755173e6b64bc61cbee8b324d4a12239a4ecdbcc" address="unix:///run/containerd/s/348f10f8f13a1f63da561042a9e7dcd162d49a8703c46a223cf97edc6c0d30cf" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:14.252102 kubelet[2931]: E1101 00:38:14.252019 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.252102 kubelet[2931]: W1101 00:38:14.252057 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.253710 kubelet[2931]: 
E1101 00:38:14.253676 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.305748 systemd[1]: Started cri-containerd-62ca3cbb1652bcf9ef007279755173e6b64bc61cbee8b324d4a12239a4ecdbcc.scope - libcontainer container 62ca3cbb1652bcf9ef007279755173e6b64bc61cbee8b324d4a12239a4ecdbcc. Nov 1 00:38:14.317089 kubelet[2931]: E1101 00:38:14.316997 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:14.389274 kubelet[2931]: E1101 00:38:14.388825 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.389274 kubelet[2931]: W1101 00:38:14.388980 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.389274 kubelet[2931]: E1101 00:38:14.389014 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.389956 kubelet[2931]: E1101 00:38:14.389911 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.389956 kubelet[2931]: W1101 00:38:14.389932 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.390080 kubelet[2931]: E1101 00:38:14.389970 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.390530 kubelet[2931]: E1101 00:38:14.390277 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.390530 kubelet[2931]: W1101 00:38:14.390527 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.390855 kubelet[2931]: E1101 00:38:14.390547 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.391254 kubelet[2931]: E1101 00:38:14.391215 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.391254 kubelet[2931]: W1101 00:38:14.391236 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.391535 kubelet[2931]: E1101 00:38:14.391322 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.392098 kubelet[2931]: E1101 00:38:14.391925 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.392098 kubelet[2931]: W1101 00:38:14.391939 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.392098 kubelet[2931]: E1101 00:38:14.391964 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.393054 kubelet[2931]: E1101 00:38:14.392999 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.393054 kubelet[2931]: W1101 00:38:14.393020 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.393054 kubelet[2931]: E1101 00:38:14.393037 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.393499 kubelet[2931]: E1101 00:38:14.393350 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.393499 kubelet[2931]: W1101 00:38:14.393377 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.393499 kubelet[2931]: E1101 00:38:14.393393 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.394318 kubelet[2931]: E1101 00:38:14.394262 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.394318 kubelet[2931]: W1101 00:38:14.394293 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.394318 kubelet[2931]: E1101 00:38:14.394311 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.394728 kubelet[2931]: E1101 00:38:14.394669 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.394728 kubelet[2931]: W1101 00:38:14.394722 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.394810 kubelet[2931]: E1101 00:38:14.394746 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.395152 kubelet[2931]: E1101 00:38:14.395128 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.395152 kubelet[2931]: W1101 00:38:14.395148 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.395734 kubelet[2931]: E1101 00:38:14.395164 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.395734 kubelet[2931]: E1101 00:38:14.395414 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.395734 kubelet[2931]: W1101 00:38:14.395428 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.395734 kubelet[2931]: E1101 00:38:14.395448 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.396645 kubelet[2931]: E1101 00:38:14.396476 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.396645 kubelet[2931]: W1101 00:38:14.396529 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.396645 kubelet[2931]: E1101 00:38:14.396545 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.397518 kubelet[2931]: E1101 00:38:14.397417 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.399307 kubelet[2931]: W1101 00:38:14.397597 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.399307 kubelet[2931]: E1101 00:38:14.397632 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.399307 kubelet[2931]: E1101 00:38:14.398083 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.399307 kubelet[2931]: W1101 00:38:14.398096 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.399307 kubelet[2931]: E1101 00:38:14.398111 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.399307 kubelet[2931]: E1101 00:38:14.398592 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.399307 kubelet[2931]: W1101 00:38:14.398606 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.399307 kubelet[2931]: E1101 00:38:14.398621 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.399307 kubelet[2931]: E1101 00:38:14.399057 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.399307 kubelet[2931]: W1101 00:38:14.399071 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.399782 containerd[1648]: time="2025-11-01T00:38:14.398087221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4znc,Uid:b320af88-bb45-4fb8-88cb-99b2071aaa74,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:14.399861 kubelet[2931]: E1101 00:38:14.399085 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.400692 kubelet[2931]: E1101 00:38:14.400666 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.400782 kubelet[2931]: W1101 00:38:14.400736 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.400782 kubelet[2931]: E1101 00:38:14.400757 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.401259 kubelet[2931]: E1101 00:38:14.401201 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.401785 kubelet[2931]: W1101 00:38:14.401338 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.401785 kubelet[2931]: E1101 00:38:14.401365 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.401785 kubelet[2931]: E1101 00:38:14.401777 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.401969 kubelet[2931]: W1101 00:38:14.401791 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.401969 kubelet[2931]: E1101 00:38:14.401845 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.402711 kubelet[2931]: E1101 00:38:14.402203 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.402711 kubelet[2931]: W1101 00:38:14.402279 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.402711 kubelet[2931]: E1101 00:38:14.402342 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.405775 kubelet[2931]: E1101 00:38:14.405651 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.405775 kubelet[2931]: W1101 00:38:14.405673 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.405775 kubelet[2931]: E1101 00:38:14.405690 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.405775 kubelet[2931]: I1101 00:38:14.405739 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12234797-91a4-4e56-83d9-8fb50717e71b-kubelet-dir\") pod \"csi-node-driver-rbrtf\" (UID: \"12234797-91a4-4e56-83d9-8fb50717e71b\") " pod="calico-system/csi-node-driver-rbrtf" Nov 1 00:38:14.407557 kubelet[2931]: E1101 00:38:14.405997 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.407557 kubelet[2931]: W1101 00:38:14.406012 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.407557 kubelet[2931]: E1101 00:38:14.406047 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.407557 kubelet[2931]: I1101 00:38:14.406072 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4brpf\" (UniqueName: \"kubernetes.io/projected/12234797-91a4-4e56-83d9-8fb50717e71b-kube-api-access-4brpf\") pod \"csi-node-driver-rbrtf\" (UID: \"12234797-91a4-4e56-83d9-8fb50717e71b\") " pod="calico-system/csi-node-driver-rbrtf" Nov 1 00:38:14.407557 kubelet[2931]: E1101 00:38:14.406614 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.407557 kubelet[2931]: W1101 00:38:14.406628 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.407557 kubelet[2931]: E1101 00:38:14.406644 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.407557 kubelet[2931]: E1101 00:38:14.406958 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.407557 kubelet[2931]: W1101 00:38:14.406973 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.407963 kubelet[2931]: E1101 00:38:14.406990 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.407963 kubelet[2931]: E1101 00:38:14.407231 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.407963 kubelet[2931]: W1101 00:38:14.407243 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.407963 kubelet[2931]: E1101 00:38:14.407277 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.407963 kubelet[2931]: E1101 00:38:14.407577 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.407963 kubelet[2931]: W1101 00:38:14.407594 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.407963 kubelet[2931]: E1101 00:38:14.407644 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.410245 kubelet[2931]: E1101 00:38:14.408704 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.410245 kubelet[2931]: W1101 00:38:14.408724 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.410245 kubelet[2931]: E1101 00:38:14.408740 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.410245 kubelet[2931]: I1101 00:38:14.408786 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12234797-91a4-4e56-83d9-8fb50717e71b-socket-dir\") pod \"csi-node-driver-rbrtf\" (UID: \"12234797-91a4-4e56-83d9-8fb50717e71b\") " pod="calico-system/csi-node-driver-rbrtf" Nov 1 00:38:14.410245 kubelet[2931]: E1101 00:38:14.409045 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.410245 kubelet[2931]: W1101 00:38:14.409058 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.410245 kubelet[2931]: E1101 00:38:14.409090 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.410245 kubelet[2931]: E1101 00:38:14.409351 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.410245 kubelet[2931]: W1101 00:38:14.409364 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.411915 kubelet[2931]: E1101 00:38:14.409397 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.411915 kubelet[2931]: E1101 00:38:14.409644 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.411915 kubelet[2931]: W1101 00:38:14.409659 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.411915 kubelet[2931]: E1101 00:38:14.409682 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.411915 kubelet[2931]: I1101 00:38:14.409725 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12234797-91a4-4e56-83d9-8fb50717e71b-registration-dir\") pod \"csi-node-driver-rbrtf\" (UID: \"12234797-91a4-4e56-83d9-8fb50717e71b\") " pod="calico-system/csi-node-driver-rbrtf" Nov 1 00:38:14.411915 kubelet[2931]: E1101 00:38:14.410034 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.411915 kubelet[2931]: W1101 00:38:14.410049 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.411915 kubelet[2931]: E1101 00:38:14.410076 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.411915 kubelet[2931]: E1101 00:38:14.410571 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.415803 kubelet[2931]: W1101 00:38:14.410589 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.415803 kubelet[2931]: E1101 00:38:14.410616 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.415803 kubelet[2931]: I1101 00:38:14.410642 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/12234797-91a4-4e56-83d9-8fb50717e71b-varrun\") pod \"csi-node-driver-rbrtf\" (UID: \"12234797-91a4-4e56-83d9-8fb50717e71b\") " pod="calico-system/csi-node-driver-rbrtf" Nov 1 00:38:14.415803 kubelet[2931]: E1101 00:38:14.411165 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.415803 kubelet[2931]: W1101 00:38:14.411182 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.415803 kubelet[2931]: E1101 00:38:14.411198 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.416359 kubelet[2931]: E1101 00:38:14.416150 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.416359 kubelet[2931]: W1101 00:38:14.416179 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.416359 kubelet[2931]: E1101 00:38:14.416197 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.417118 kubelet[2931]: E1101 00:38:14.416506 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.417118 kubelet[2931]: W1101 00:38:14.416520 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.417118 kubelet[2931]: E1101 00:38:14.416535 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.514510 kubelet[2931]: E1101 00:38:14.512730 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.515401 kubelet[2931]: W1101 00:38:14.514756 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.515401 kubelet[2931]: E1101 00:38:14.514808 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.516196 kubelet[2931]: E1101 00:38:14.515951 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.516518 kubelet[2931]: W1101 00:38:14.516376 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.517159 kubelet[2931]: E1101 00:38:14.517136 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.519077 kubelet[2931]: E1101 00:38:14.518848 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.519077 kubelet[2931]: W1101 00:38:14.518869 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.519077 kubelet[2931]: E1101 00:38:14.518893 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.520512 kubelet[2931]: E1101 00:38:14.520033 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.520659 kubelet[2931]: W1101 00:38:14.520625 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.522013 kubelet[2931]: E1101 00:38:14.521474 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.523260 kubelet[2931]: E1101 00:38:14.522582 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.523260 kubelet[2931]: W1101 00:38:14.523039 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.527509 kubelet[2931]: E1101 00:38:14.526993 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.527509 kubelet[2931]: W1101 00:38:14.527016 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.527509 kubelet[2931]: E1101 00:38:14.527324 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.527509 kubelet[2931]: W1101 00:38:14.527339 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.527509 kubelet[2931]: E1101 00:38:14.527357 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.529372 kubelet[2931]: E1101 00:38:14.528716 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.529372 kubelet[2931]: E1101 00:38:14.528781 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.530419 kubelet[2931]: E1101 00:38:14.529638 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.530419 kubelet[2931]: W1101 00:38:14.529652 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.530419 kubelet[2931]: E1101 00:38:14.529677 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.531713 kubelet[2931]: E1101 00:38:14.531691 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.531827 kubelet[2931]: W1101 00:38:14.531791 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.532165 kubelet[2931]: E1101 00:38:14.531922 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.534599 kubelet[2931]: E1101 00:38:14.534359 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.534599 kubelet[2931]: W1101 00:38:14.534389 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.534599 kubelet[2931]: E1101 00:38:14.534413 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.535985 kubelet[2931]: E1101 00:38:14.535889 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.536396 kubelet[2931]: W1101 00:38:14.536330 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.537069 kubelet[2931]: E1101 00:38:14.536817 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.538523 kubelet[2931]: E1101 00:38:14.538149 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.539176 kubelet[2931]: W1101 00:38:14.538658 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.540074 kubelet[2931]: E1101 00:38:14.539300 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.541367 kubelet[2931]: E1101 00:38:14.541116 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.541367 kubelet[2931]: W1101 00:38:14.541138 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.541367 kubelet[2931]: E1101 00:38:14.541248 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.541873 kubelet[2931]: E1101 00:38:14.541844 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.542056 kubelet[2931]: W1101 00:38:14.541979 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.542747 kubelet[2931]: E1101 00:38:14.542579 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.543951 kubelet[2931]: E1101 00:38:14.543913 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.544269 kubelet[2931]: W1101 00:38:14.544136 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.545445 kubelet[2931]: E1101 00:38:14.544529 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.545792 kubelet[2931]: E1101 00:38:14.545771 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.546994 kubelet[2931]: W1101 00:38:14.546542 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.546994 kubelet[2931]: E1101 00:38:14.546615 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.546994 kubelet[2931]: E1101 00:38:14.546916 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.546994 kubelet[2931]: W1101 00:38:14.546930 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.547327 kubelet[2931]: E1101 00:38:14.547216 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.548310 kubelet[2931]: E1101 00:38:14.547889 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.548627 kubelet[2931]: W1101 00:38:14.548548 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.549295 kubelet[2931]: E1101 00:38:14.549171 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.550039 kubelet[2931]: E1101 00:38:14.549912 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.550039 kubelet[2931]: W1101 00:38:14.549933 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.550569 kubelet[2931]: E1101 00:38:14.550414 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.551917 kubelet[2931]: E1101 00:38:14.551000 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.552727 kubelet[2931]: W1101 00:38:14.552699 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.554087 kubelet[2931]: E1101 00:38:14.554064 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.555602 containerd[1648]: time="2025-11-01T00:38:14.554733863Z" level=info msg="connecting to shim 682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f" address="unix:///run/containerd/s/1635aa319885cd3a86e21b15bdc08dc832079cf1c8f05544bd6fb04266ba5a34" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:14.555808 kubelet[2931]: E1101 00:38:14.555779 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.556059 kubelet[2931]: W1101 00:38:14.555902 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.557370 kubelet[2931]: E1101 00:38:14.557157 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.558311 kubelet[2931]: E1101 00:38:14.557607 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.558311 kubelet[2931]: W1101 00:38:14.558191 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.561141 kubelet[2931]: E1101 00:38:14.560990 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.561141 kubelet[2931]: W1101 00:38:14.561011 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.563292 kubelet[2931]: E1101 00:38:14.562981 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.563292 kubelet[2931]: W1101 00:38:14.563227 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.563292 kubelet[2931]: E1101 00:38:14.563246 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.564685 kubelet[2931]: E1101 00:38:14.563887 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:14.569513 kubelet[2931]: E1101 00:38:14.568587 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.569513 kubelet[2931]: W1101 00:38:14.568621 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.569513 kubelet[2931]: E1101 00:38:14.568645 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.569513 kubelet[2931]: E1101 00:38:14.564531 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.579274 containerd[1648]: time="2025-11-01T00:38:14.579226741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c77675777-jp2j8,Uid:b6978bf8-8695-4c9f-b328-68dac6c050a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"62ca3cbb1652bcf9ef007279755173e6b64bc61cbee8b324d4a12239a4ecdbcc\"" Nov 1 00:38:14.583027 containerd[1648]: time="2025-11-01T00:38:14.582986315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:38:14.605860 kubelet[2931]: E1101 00:38:14.605808 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:14.605860 kubelet[2931]: W1101 00:38:14.605847 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:14.605860 kubelet[2931]: E1101 00:38:14.605873 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:14.632730 systemd[1]: Started cri-containerd-682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f.scope - libcontainer container 682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f. Nov 1 00:38:14.690597 containerd[1648]: time="2025-11-01T00:38:14.689308558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j4znc,Uid:b320af88-bb45-4fb8-88cb-99b2071aaa74,Namespace:calico-system,Attempt:0,} returns sandbox id \"682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f\"" Nov 1 00:38:15.919511 kubelet[2931]: E1101 00:38:15.919158 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:16.079783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257469264.mount: Deactivated successfully. 
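[Editor's note] The burst of kubelet messages above is one failure repeated on every plugin probe: the FlexVolume plugin directory nodeagent~uds exists under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but the driver executable uds it should contain cannot be run ("executable file not found in $PATH"), so the init call produces no output and driver-call.go fails to unmarshal the empty string ("unexpected end of JSON input"); plugins.go then skips the plugin, and the same triplet recurs on the next probe. For orientation only, a minimal Go sketch of the kind of executable kubelet expects at that path (an assumption-laden stand-in, not the real node-agent driver; field names follow the usual FlexVolume JSON convention):

package main

// Minimal FlexVolume-style driver sketch: on "init" it prints the JSON status
// object that kubelet's driver-call.go tries to unmarshal. A working executable
// at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds
// would stop the "unexpected end of JSON input" messages, because the output
// would no longer be empty. (Sketch only; exit-code handling is simplified.)

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func emit(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		emit(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // no controller attach/detach phase
		})
		return
	}
	// Sub-commands this sketch does not implement (mount, unmount, ...).
	emit(driverStatus{Status: "Not supported"})
	os.Exit(1)
}

Whether the right fix is to supply the missing driver or simply to remove the stale nodeagent~uds directory depends on what on this node still relies on FlexVolume; the log alone does not say.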
Nov 1 00:38:17.924012 kubelet[2931]: E1101 00:38:17.923535 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:18.418573 containerd[1648]: time="2025-11-01T00:38:18.417853438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:18.420919 containerd[1648]: time="2025-11-01T00:38:18.420717756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:38:18.421715 containerd[1648]: time="2025-11-01T00:38:18.421677917Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:18.425085 containerd[1648]: time="2025-11-01T00:38:18.425049057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:18.427473 containerd[1648]: time="2025-11-01T00:38:18.426801117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.843764638s" Nov 1 00:38:18.427473 containerd[1648]: time="2025-11-01T00:38:18.426845577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:38:18.430024 containerd[1648]: time="2025-11-01T00:38:18.429991450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:38:18.451179 containerd[1648]: time="2025-11-01T00:38:18.451104217Z" level=info msg="CreateContainer within sandbox \"62ca3cbb1652bcf9ef007279755173e6b64bc61cbee8b324d4a12239a4ecdbcc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:38:18.465955 containerd[1648]: time="2025-11-01T00:38:18.465852040Z" level=info msg="Container 9977fd37ed0075c2810328c589a67e11ee4770e6675d423d4dfb9fa6933a2525: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:38:18.472232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1182909538.mount: Deactivated successfully. 
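[Editor's note] A rough aside on the typha pull above (illustrative arithmetic only, not a containerd metric): containerd reports bytes read=35234628 when the pull stopped and a total pull time of 3.843764638s, which averages out to roughly 9 MB/s, with the caveat that this interval also covers unpacking, not just network transfer.

package main

import "fmt"

func main() {
	const bytesRead = 35234628      // "bytes read" logged when the typha pull stopped
	const pullSeconds = 3.843764638 // duration reported by "Pulled image ... in 3.843764638s"

	rate := float64(bytesRead) / pullSeconds
	fmt.Printf("average pull rate: %.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
	// Prints roughly 9.2 MB/s (8.7 MiB/s).
}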
Nov 1 00:38:18.488209 containerd[1648]: time="2025-11-01T00:38:18.488049860Z" level=info msg="CreateContainer within sandbox \"62ca3cbb1652bcf9ef007279755173e6b64bc61cbee8b324d4a12239a4ecdbcc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9977fd37ed0075c2810328c589a67e11ee4770e6675d423d4dfb9fa6933a2525\"" Nov 1 00:38:18.489371 containerd[1648]: time="2025-11-01T00:38:18.489307608Z" level=info msg="StartContainer for \"9977fd37ed0075c2810328c589a67e11ee4770e6675d423d4dfb9fa6933a2525\"" Nov 1 00:38:18.493056 containerd[1648]: time="2025-11-01T00:38:18.492961369Z" level=info msg="connecting to shim 9977fd37ed0075c2810328c589a67e11ee4770e6675d423d4dfb9fa6933a2525" address="unix:///run/containerd/s/348f10f8f13a1f63da561042a9e7dcd162d49a8703c46a223cf97edc6c0d30cf" protocol=ttrpc version=3 Nov 1 00:38:18.533758 systemd[1]: Started cri-containerd-9977fd37ed0075c2810328c589a67e11ee4770e6675d423d4dfb9fa6933a2525.scope - libcontainer container 9977fd37ed0075c2810328c589a67e11ee4770e6675d423d4dfb9fa6933a2525. Nov 1 00:38:18.635529 containerd[1648]: time="2025-11-01T00:38:18.635422528Z" level=info msg="StartContainer for \"9977fd37ed0075c2810328c589a67e11ee4770e6675d423d4dfb9fa6933a2525\" returns successfully" Nov 1 00:38:19.083297 kubelet[2931]: I1101 00:38:19.082415 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c77675777-jp2j8" podStartSLOduration=2.23622911 podStartE2EDuration="6.082381246s" podCreationTimestamp="2025-11-01 00:38:13 +0000 UTC" firstStartedPulling="2025-11-01 00:38:14.582657459 +0000 UTC m=+22.967291851" lastFinishedPulling="2025-11-01 00:38:18.4288096 +0000 UTC m=+26.813443987" observedRunningTime="2025-11-01 00:38:19.081437755 +0000 UTC m=+27.466072181" watchObservedRunningTime="2025-11-01 00:38:19.082381246 +0000 UTC m=+27.467015642" Nov 1 00:38:19.140210 kubelet[2931]: E1101 00:38:19.139997 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.140210 kubelet[2931]: W1101 00:38:19.140033 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.142122 kubelet[2931]: E1101 00:38:19.142091 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.142540 kubelet[2931]: E1101 00:38:19.142518 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.142692 kubelet[2931]: W1101 00:38:19.142669 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.142813 kubelet[2931]: E1101 00:38:19.142792 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.143497 kubelet[2931]: E1101 00:38:19.143254 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.143497 kubelet[2931]: W1101 00:38:19.143274 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.143497 kubelet[2931]: E1101 00:38:19.143297 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.143801 kubelet[2931]: E1101 00:38:19.143780 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.143911 kubelet[2931]: W1101 00:38:19.143891 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.144201 kubelet[2931]: E1101 00:38:19.144002 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.144383 kubelet[2931]: E1101 00:38:19.144364 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.144507 kubelet[2931]: W1101 00:38:19.144464 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.144743 kubelet[2931]: E1101 00:38:19.144613 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.144904 kubelet[2931]: E1101 00:38:19.144885 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.145155 kubelet[2931]: W1101 00:38:19.144994 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.145155 kubelet[2931]: E1101 00:38:19.145019 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.145382 kubelet[2931]: E1101 00:38:19.145363 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.145510 kubelet[2931]: W1101 00:38:19.145471 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.145617 kubelet[2931]: E1101 00:38:19.145599 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.146119 kubelet[2931]: E1101 00:38:19.145940 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.146119 kubelet[2931]: W1101 00:38:19.145959 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.146119 kubelet[2931]: E1101 00:38:19.145976 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.146647 kubelet[2931]: E1101 00:38:19.146359 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.146647 kubelet[2931]: W1101 00:38:19.146374 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.146647 kubelet[2931]: E1101 00:38:19.146391 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.147185 kubelet[2931]: E1101 00:38:19.147020 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.147185 kubelet[2931]: W1101 00:38:19.147039 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.147185 kubelet[2931]: E1101 00:38:19.147055 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.147507 kubelet[2931]: E1101 00:38:19.147428 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.147507 kubelet[2931]: W1101 00:38:19.147447 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.147507 kubelet[2931]: E1101 00:38:19.147462 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.148091 kubelet[2931]: E1101 00:38:19.147931 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.148091 kubelet[2931]: W1101 00:38:19.147949 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.148091 kubelet[2931]: E1101 00:38:19.147964 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.148382 kubelet[2931]: E1101 00:38:19.148363 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.148527 kubelet[2931]: W1101 00:38:19.148472 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.148644 kubelet[2931]: E1101 00:38:19.148624 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.149148 kubelet[2931]: E1101 00:38:19.148971 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.149148 kubelet[2931]: W1101 00:38:19.148999 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.149148 kubelet[2931]: E1101 00:38:19.149017 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.149570 kubelet[2931]: E1101 00:38:19.149550 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.149769 kubelet[2931]: W1101 00:38:19.149665 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.149769 kubelet[2931]: E1101 00:38:19.149689 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.172958 kubelet[2931]: E1101 00:38:19.172915 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.172958 kubelet[2931]: W1101 00:38:19.172948 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.173448 kubelet[2931]: E1101 00:38:19.172986 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.173448 kubelet[2931]: E1101 00:38:19.173300 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.173448 kubelet[2931]: W1101 00:38:19.173314 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.173448 kubelet[2931]: E1101 00:38:19.173337 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.173919 kubelet[2931]: E1101 00:38:19.173639 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.173919 kubelet[2931]: W1101 00:38:19.173675 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.173919 kubelet[2931]: E1101 00:38:19.173698 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.174379 kubelet[2931]: E1101 00:38:19.174212 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.174379 kubelet[2931]: W1101 00:38:19.174239 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.174379 kubelet[2931]: E1101 00:38:19.174295 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.174867 kubelet[2931]: E1101 00:38:19.174848 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.175061 kubelet[2931]: W1101 00:38:19.174963 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.175061 kubelet[2931]: E1101 00:38:19.174998 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.176300 kubelet[2931]: E1101 00:38:19.176169 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.176300 kubelet[2931]: W1101 00:38:19.176189 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.176634 kubelet[2931]: E1101 00:38:19.176614 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.176854 kubelet[2931]: W1101 00:38:19.176730 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.177016 kubelet[2931]: E1101 00:38:19.176997 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.177266 kubelet[2931]: W1101 00:38:19.177092 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.177266 kubelet[2931]: E1101 00:38:19.177116 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.177544 kubelet[2931]: E1101 00:38:19.177521 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.177669 kubelet[2931]: E1101 00:38:19.177649 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.177974 kubelet[2931]: E1101 00:38:19.177822 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.177974 kubelet[2931]: W1101 00:38:19.177885 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.177974 kubelet[2931]: E1101 00:38:19.177912 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.178502 kubelet[2931]: E1101 00:38:19.178398 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.178502 kubelet[2931]: W1101 00:38:19.178417 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.178502 kubelet[2931]: E1101 00:38:19.178444 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.178787 kubelet[2931]: E1101 00:38:19.178764 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.178859 kubelet[2931]: W1101 00:38:19.178787 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.178859 kubelet[2931]: E1101 00:38:19.178813 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.179327 kubelet[2931]: E1101 00:38:19.179220 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.179327 kubelet[2931]: W1101 00:38:19.179239 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.179327 kubelet[2931]: E1101 00:38:19.179267 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.179872 kubelet[2931]: E1101 00:38:19.179738 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.179872 kubelet[2931]: W1101 00:38:19.179756 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.179872 kubelet[2931]: E1101 00:38:19.179782 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.180376 kubelet[2931]: E1101 00:38:19.180264 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.180376 kubelet[2931]: W1101 00:38:19.180281 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.180376 kubelet[2931]: E1101 00:38:19.180331 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.180878 kubelet[2931]: E1101 00:38:19.180829 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.180878 kubelet[2931]: W1101 00:38:19.180847 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.181100 kubelet[2931]: E1101 00:38:19.181079 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.181992 kubelet[2931]: E1101 00:38:19.181972 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.182184 kubelet[2931]: W1101 00:38:19.182093 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.182184 kubelet[2931]: E1101 00:38:19.182165 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.182568 kubelet[2931]: E1101 00:38:19.182549 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.182816 kubelet[2931]: W1101 00:38:19.182678 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.182816 kubelet[2931]: E1101 00:38:19.182731 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:38:19.183039 kubelet[2931]: E1101 00:38:19.183021 2931 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:38:19.183179 kubelet[2931]: W1101 00:38:19.183106 2931 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:38:19.183179 kubelet[2931]: E1101 00:38:19.183142 2931 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:38:19.878598 containerd[1648]: time="2025-11-01T00:38:19.878049790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:19.878598 containerd[1648]: time="2025-11-01T00:38:19.878552704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:38:19.879690 containerd[1648]: time="2025-11-01T00:38:19.879653877Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:19.882474 containerd[1648]: time="2025-11-01T00:38:19.882438952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:19.883517 containerd[1648]: time="2025-11-01T00:38:19.883318661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.453040791s" Nov 1 00:38:19.883517 containerd[1648]: time="2025-11-01T00:38:19.883360423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:38:19.887413 containerd[1648]: time="2025-11-01T00:38:19.887197627Z" level=info msg="CreateContainer within sandbox \"682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:38:19.899124 containerd[1648]: time="2025-11-01T00:38:19.898613206Z" level=info msg="Container 1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:38:19.914863 containerd[1648]: time="2025-11-01T00:38:19.914743104Z" level=info msg="CreateContainer within sandbox \"682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6\"" Nov 1 00:38:19.915796 containerd[1648]: time="2025-11-01T00:38:19.915744932Z" level=info msg="StartContainer for \"1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6\"" Nov 1 00:38:19.918853 containerd[1648]: time="2025-11-01T00:38:19.918811290Z" level=info msg="connecting to shim 1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6" address="unix:///run/containerd/s/1635aa319885cd3a86e21b15bdc08dc832079cf1c8f05544bd6fb04266ba5a34" protocol=ttrpc version=3 Nov 1 00:38:19.919423 kubelet[2931]: E1101 00:38:19.919364 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:19.958945 systemd[1]: Started 
cri-containerd-1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6.scope - libcontainer container 1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6. Nov 1 00:38:20.042260 containerd[1648]: time="2025-11-01T00:38:20.042197606Z" level=info msg="StartContainer for \"1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6\" returns successfully" Nov 1 00:38:20.053713 systemd[1]: cri-containerd-1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6.scope: Deactivated successfully. Nov 1 00:38:20.076514 kubelet[2931]: I1101 00:38:20.076283 2931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:38:20.091322 containerd[1648]: time="2025-11-01T00:38:20.091138801Z" level=info msg="received exit event container_id:\"1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6\" id:\"1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6\" pid:3603 exited_at:{seconds:1761957500 nanos:58280189}" Nov 1 00:38:20.126761 containerd[1648]: time="2025-11-01T00:38:20.126698730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6\" id:\"1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6\" pid:3603 exited_at:{seconds:1761957500 nanos:58280189}" Nov 1 00:38:20.144241 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d8b981530349611d8af7d82165350a944ab2458555b58b681dca986610f42c6-rootfs.mount: Deactivated successfully. Nov 1 00:38:21.083306 containerd[1648]: time="2025-11-01T00:38:21.083110604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:38:21.919436 kubelet[2931]: E1101 00:38:21.918403 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:23.919289 kubelet[2931]: E1101 00:38:23.919088 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:25.919198 kubelet[2931]: E1101 00:38:25.919078 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:26.267598 containerd[1648]: time="2025-11-01T00:38:26.266968702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:26.269415 containerd[1648]: time="2025-11-01T00:38:26.269378949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:38:26.270102 containerd[1648]: time="2025-11-01T00:38:26.270045810Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:26.273993 containerd[1648]: time="2025-11-01T00:38:26.273952583Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:26.275026 containerd[1648]: time="2025-11-01T00:38:26.274989190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.191819522s" Nov 1 00:38:26.275167 containerd[1648]: time="2025-11-01T00:38:26.275139132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:38:26.279415 containerd[1648]: time="2025-11-01T00:38:26.279377588Z" level=info msg="CreateContainer within sandbox \"682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:38:26.293698 containerd[1648]: time="2025-11-01T00:38:26.293660019Z" level=info msg="Container 00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:38:26.295820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171179800.mount: Deactivated successfully. Nov 1 00:38:26.307198 containerd[1648]: time="2025-11-01T00:38:26.307151208Z" level=info msg="CreateContainer within sandbox \"682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1\"" Nov 1 00:38:26.309051 containerd[1648]: time="2025-11-01T00:38:26.309016760Z" level=info msg="StartContainer for \"00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1\"" Nov 1 00:38:26.312601 containerd[1648]: time="2025-11-01T00:38:26.312559125Z" level=info msg="connecting to shim 00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1" address="unix:///run/containerd/s/1635aa319885cd3a86e21b15bdc08dc832079cf1c8f05544bd6fb04266ba5a34" protocol=ttrpc version=3 Nov 1 00:38:26.349694 systemd[1]: Started cri-containerd-00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1.scope - libcontainer container 00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1. Nov 1 00:38:26.418713 containerd[1648]: time="2025-11-01T00:38:26.418569691Z" level=info msg="StartContainer for \"00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1\" returns successfully" Nov 1 00:38:27.228683 kubelet[2931]: I1101 00:38:27.228633 2931 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:38:27.628330 systemd[1]: cri-containerd-00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1.scope: Deactivated successfully. Nov 1 00:38:27.628845 systemd[1]: cri-containerd-00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1.scope: Consumed 722ms CPU time, 156.8M memory peak, 5.6M read from disk, 171.3M written to disk. 
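The repeated kubelet errors in this stretch come from FlexVolume dynamic probing: the kubelet invokes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with `init` and tries to parse its stdout as JSON, but the binary is not installed yet (installing it is what the flexvol-driver container built from pod2daemon-flexvol above appears to be for), so the output is empty and decoding fails. A minimal Go sketch of that failure mode and of the kind of JSON a working driver would answer with — the struct here is an illustrative approximation, not the kubelet's exact type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative shape of a FlexVolume "init" response; an approximation of what
// the kubelet's driver-call path unmarshals, not a verbatim copy of its types.
type driverStatus struct {
	Status       string `json:"status"`
	Message      string `json:"message,omitempty"`
	Capabilities struct {
		Attach bool `json:"attach"`
	} `json:"capabilities"`
}

func main() {
	var st driverStatus

	// The uds binary is missing, so the driver call produces no output at all;
	// unmarshalling an empty payload yields exactly the error text in the log.
	fmt.Println(json.Unmarshal([]byte(""), &st)) // unexpected end of JSON input

	// Once the driver is installed, a successful init call answers with JSON
	// along these lines.
	ok := []byte(`{"status":"Success","capabilities":{"attach":false}}`)
	if err := json.Unmarshal(ok, &st); err == nil {
		fmt.Println("parsed init response, status:", st.Status)
	}
}
```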
Nov 1 00:38:27.634038 containerd[1648]: time="2025-11-01T00:38:27.633832552Z" level=info msg="received exit event container_id:\"00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1\" id:\"00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1\" pid:3660 exited_at:{seconds:1761957507 nanos:633506789}" Nov 1 00:38:27.636186 containerd[1648]: time="2025-11-01T00:38:27.636122905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1\" id:\"00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1\" pid:3660 exited_at:{seconds:1761957507 nanos:633506789}" Nov 1 00:38:27.711846 kubelet[2931]: I1101 00:38:27.711775 2931 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:38:27.798529 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00432e827e6240a0e4d2ff2cf3bf246da19e2c7a5e7a8c42fd136ba7a3414aa1-rootfs.mount: Deactivated successfully. Nov 1 00:38:27.813618 systemd[1]: Created slice kubepods-burstable-pod4519991b_c9e6_4c9d_9f5b_daa009fd2509.slice - libcontainer container kubepods-burstable-pod4519991b_c9e6_4c9d_9f5b_daa009fd2509.slice. Nov 1 00:38:27.884097 kubelet[2931]: I1101 00:38:27.883784 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tng9h\" (UniqueName: \"kubernetes.io/projected/4519991b-c9e6-4c9d-9f5b-daa009fd2509-kube-api-access-tng9h\") pod \"coredns-668d6bf9bc-26pbk\" (UID: \"4519991b-c9e6-4c9d-9f5b-daa009fd2509\") " pod="kube-system/coredns-668d6bf9bc-26pbk" Nov 1 00:38:27.884097 kubelet[2931]: I1101 00:38:27.883851 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/882832a0-46d3-43b4-82bb-ea5df649d892-goldmane-key-pair\") pod \"goldmane-666569f655-dh7wf\" (UID: \"882832a0-46d3-43b4-82bb-ea5df649d892\") " pod="calico-system/goldmane-666569f655-dh7wf" Nov 1 00:38:27.884097 kubelet[2931]: I1101 00:38:27.883898 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6vvx\" (UniqueName: \"kubernetes.io/projected/882832a0-46d3-43b4-82bb-ea5df649d892-kube-api-access-q6vvx\") pod \"goldmane-666569f655-dh7wf\" (UID: \"882832a0-46d3-43b4-82bb-ea5df649d892\") " pod="calico-system/goldmane-666569f655-dh7wf" Nov 1 00:38:27.884097 kubelet[2931]: I1101 00:38:27.883930 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db29291-fdaa-41bb-9242-99edb09b98bd-config-volume\") pod \"coredns-668d6bf9bc-cxprb\" (UID: \"7db29291-fdaa-41bb-9242-99edb09b98bd\") " pod="kube-system/coredns-668d6bf9bc-cxprb" Nov 1 00:38:27.884097 kubelet[2931]: I1101 00:38:27.883967 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9lx\" (UniqueName: \"kubernetes.io/projected/7db29291-fdaa-41bb-9242-99edb09b98bd-kube-api-access-cw9lx\") pod \"coredns-668d6bf9bc-cxprb\" (UID: \"7db29291-fdaa-41bb-9242-99edb09b98bd\") " pod="kube-system/coredns-668d6bf9bc-cxprb" Nov 1 00:38:27.884812 kubelet[2931]: I1101 00:38:27.884018 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882832a0-46d3-43b4-82bb-ea5df649d892-goldmane-ca-bundle\") pod 
\"goldmane-666569f655-dh7wf\" (UID: \"882832a0-46d3-43b4-82bb-ea5df649d892\") " pod="calico-system/goldmane-666569f655-dh7wf" Nov 1 00:38:27.884812 kubelet[2931]: I1101 00:38:27.884048 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4519991b-c9e6-4c9d-9f5b-daa009fd2509-config-volume\") pod \"coredns-668d6bf9bc-26pbk\" (UID: \"4519991b-c9e6-4c9d-9f5b-daa009fd2509\") " pod="kube-system/coredns-668d6bf9bc-26pbk" Nov 1 00:38:27.884812 kubelet[2931]: I1101 00:38:27.884077 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/882832a0-46d3-43b4-82bb-ea5df649d892-config\") pod \"goldmane-666569f655-dh7wf\" (UID: \"882832a0-46d3-43b4-82bb-ea5df649d892\") " pod="calico-system/goldmane-666569f655-dh7wf" Nov 1 00:38:27.916938 systemd[1]: Created slice kubepods-besteffort-pod882832a0_46d3_43b4_82bb_ea5df649d892.slice - libcontainer container kubepods-besteffort-pod882832a0_46d3_43b4_82bb_ea5df649d892.slice. Nov 1 00:38:27.932056 systemd[1]: Created slice kubepods-burstable-pod7db29291_fdaa_41bb_9242_99edb09b98bd.slice - libcontainer container kubepods-burstable-pod7db29291_fdaa_41bb_9242_99edb09b98bd.slice. Nov 1 00:38:27.945446 systemd[1]: Created slice kubepods-besteffort-pod088c10a7_c984_414d_8ce9_2ebf449685e8.slice - libcontainer container kubepods-besteffort-pod088c10a7_c984_414d_8ce9_2ebf449685e8.slice. Nov 1 00:38:27.960073 systemd[1]: Created slice kubepods-besteffort-podaae018ce_9d35_415b_9be9_2f54c95ef40f.slice - libcontainer container kubepods-besteffort-podaae018ce_9d35_415b_9be9_2f54c95ef40f.slice. Nov 1 00:38:27.980192 systemd[1]: Created slice kubepods-besteffort-pod7b986655_34d6_4a0c_a36f_8538fa8da4e5.slice - libcontainer container kubepods-besteffort-pod7b986655_34d6_4a0c_a36f_8538fa8da4e5.slice. 
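An aside on the systemd unit names that keep appearing here: each pod's cgroup slice is derived from its QoS class and UID, with the dashes in the UID turned into underscores. A small sketch reproducing that mapping for the pods above — an observation about the names in this log, not kubelet source code:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName rebuilds the kubepods slice name seen in the log from a pod's QoS
// class and UID: dashes become underscores, prefixed with kubepods-<qos>-pod.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches "kubepods-burstable-pod4519991b_c9e6_4c9d_9f5b_daa009fd2509.slice" above.
	fmt.Println(sliceName("burstable", "4519991b-c9e6-4c9d-9f5b-daa009fd2509"))
	// Matches the besteffort slice created for the goldmane pod.
	fmt.Println(sliceName("besteffort", "882832a0-46d3-43b4-82bb-ea5df649d892"))
}
```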
Nov 1 00:38:27.984645 kubelet[2931]: I1101 00:38:27.984526 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b986655-34d6-4a0c-a36f-8538fa8da4e5-tigera-ca-bundle\") pod \"calico-kube-controllers-76954f9f66-p69hm\" (UID: \"7b986655-34d6-4a0c-a36f-8538fa8da4e5\") " pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" Nov 1 00:38:27.984645 kubelet[2931]: I1101 00:38:27.984625 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsmnn\" (UniqueName: \"kubernetes.io/projected/088c10a7-c984-414d-8ce9-2ebf449685e8-kube-api-access-rsmnn\") pod \"whisker-6778b5489c-8tqzw\" (UID: \"088c10a7-c984-414d-8ce9-2ebf449685e8\") " pod="calico-system/whisker-6778b5489c-8tqzw" Nov 1 00:38:27.985152 kubelet[2931]: I1101 00:38:27.984668 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxbrx\" (UniqueName: \"kubernetes.io/projected/aae018ce-9d35-415b-9be9-2f54c95ef40f-kube-api-access-pxbrx\") pod \"calico-apiserver-798685d547-c7jpd\" (UID: \"aae018ce-9d35-415b-9be9-2f54c95ef40f\") " pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" Nov 1 00:38:27.985152 kubelet[2931]: I1101 00:38:27.984734 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-backend-key-pair\") pod \"whisker-6778b5489c-8tqzw\" (UID: \"088c10a7-c984-414d-8ce9-2ebf449685e8\") " pod="calico-system/whisker-6778b5489c-8tqzw" Nov 1 00:38:27.985152 kubelet[2931]: I1101 00:38:27.984784 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7g8q\" (UniqueName: \"kubernetes.io/projected/7b986655-34d6-4a0c-a36f-8538fa8da4e5-kube-api-access-g7g8q\") pod \"calico-kube-controllers-76954f9f66-p69hm\" (UID: \"7b986655-34d6-4a0c-a36f-8538fa8da4e5\") " pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" Nov 1 00:38:27.985152 kubelet[2931]: I1101 00:38:27.984814 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-ca-bundle\") pod \"whisker-6778b5489c-8tqzw\" (UID: \"088c10a7-c984-414d-8ce9-2ebf449685e8\") " pod="calico-system/whisker-6778b5489c-8tqzw" Nov 1 00:38:27.985152 kubelet[2931]: I1101 00:38:27.984910 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kst54\" (UniqueName: \"kubernetes.io/projected/9347775b-36ca-4333-aa6c-bfa61a2002e5-kube-api-access-kst54\") pod \"calico-apiserver-798685d547-5ft7v\" (UID: \"9347775b-36ca-4333-aa6c-bfa61a2002e5\") " pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" Nov 1 00:38:27.986060 kubelet[2931]: I1101 00:38:27.984946 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aae018ce-9d35-415b-9be9-2f54c95ef40f-calico-apiserver-certs\") pod \"calico-apiserver-798685d547-c7jpd\" (UID: \"aae018ce-9d35-415b-9be9-2f54c95ef40f\") " pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" Nov 1 00:38:27.986060 kubelet[2931]: I1101 00:38:27.985010 2931 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9347775b-36ca-4333-aa6c-bfa61a2002e5-calico-apiserver-certs\") pod \"calico-apiserver-798685d547-5ft7v\" (UID: \"9347775b-36ca-4333-aa6c-bfa61a2002e5\") " pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" Nov 1 00:38:27.992746 systemd[1]: Created slice kubepods-besteffort-pod9347775b_36ca_4333_aa6c_bfa61a2002e5.slice - libcontainer container kubepods-besteffort-pod9347775b_36ca_4333_aa6c_bfa61a2002e5.slice. Nov 1 00:38:28.005327 systemd[1]: Created slice kubepods-besteffort-pod12234797_91a4_4e56_83d9_8fb50717e71b.slice - libcontainer container kubepods-besteffort-pod12234797_91a4_4e56_83d9_8fb50717e71b.slice. Nov 1 00:38:28.037272 containerd[1648]: time="2025-11-01T00:38:28.037054850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbrtf,Uid:12234797-91a4-4e56-83d9-8fb50717e71b,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:28.241803 containerd[1648]: time="2025-11-01T00:38:28.240673124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dh7wf,Uid:882832a0-46d3-43b4-82bb-ea5df649d892,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:28.241803 containerd[1648]: time="2025-11-01T00:38:28.241639510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26pbk,Uid:4519991b-c9e6-4c9d-9f5b-daa009fd2509,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:28.243193 containerd[1648]: time="2025-11-01T00:38:28.241828723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxprb,Uid:7db29291-fdaa-41bb-9242-99edb09b98bd,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:28.257694 containerd[1648]: time="2025-11-01T00:38:28.257625443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6778b5489c-8tqzw,Uid:088c10a7-c984-414d-8ce9-2ebf449685e8,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:28.276887 containerd[1648]: time="2025-11-01T00:38:28.276740997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-c7jpd,Uid:aae018ce-9d35-415b-9be9-2f54c95ef40f,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:38:28.288274 containerd[1648]: time="2025-11-01T00:38:28.288206442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76954f9f66-p69hm,Uid:7b986655-34d6-4a0c-a36f-8538fa8da4e5,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:28.300108 containerd[1648]: time="2025-11-01T00:38:28.300071918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-5ft7v,Uid:9347775b-36ca-4333-aa6c-bfa61a2002e5,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:38:28.328914 containerd[1648]: time="2025-11-01T00:38:28.328867697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:38:28.617726 containerd[1648]: time="2025-11-01T00:38:28.617664893Z" level=error msg="Failed to destroy network for sandbox \"8be629dcf9649dc1da99e8206fea091f4cb49c83b138a4e5c76c0db6e8637f6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.622039 containerd[1648]: time="2025-11-01T00:38:28.621926263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbrtf,Uid:12234797-91a4-4e56-83d9-8fb50717e71b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"8be629dcf9649dc1da99e8206fea091f4cb49c83b138a4e5c76c0db6e8637f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.624757 kubelet[2931]: E1101 00:38:28.624676 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8be629dcf9649dc1da99e8206fea091f4cb49c83b138a4e5c76c0db6e8637f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.626077 kubelet[2931]: E1101 00:38:28.625530 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8be629dcf9649dc1da99e8206fea091f4cb49c83b138a4e5c76c0db6e8637f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rbrtf" Nov 1 00:38:28.626077 kubelet[2931]: E1101 00:38:28.625579 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8be629dcf9649dc1da99e8206fea091f4cb49c83b138a4e5c76c0db6e8637f6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rbrtf" Nov 1 00:38:28.626077 kubelet[2931]: E1101 00:38:28.625651 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8be629dcf9649dc1da99e8206fea091f4cb49c83b138a4e5c76c0db6e8637f6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:28.663671 containerd[1648]: time="2025-11-01T00:38:28.663477092Z" level=error msg="Failed to destroy network for sandbox \"b7bb3847e831235df04aa6b110bcd8cfb5a2a15a92e8e1c40d43793b06e81681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.671185 containerd[1648]: time="2025-11-01T00:38:28.671018745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dh7wf,Uid:882832a0-46d3-43b4-82bb-ea5df649d892,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7bb3847e831235df04aa6b110bcd8cfb5a2a15a92e8e1c40d43793b06e81681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.671566 kubelet[2931]: E1101 00:38:28.671514 2931 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7bb3847e831235df04aa6b110bcd8cfb5a2a15a92e8e1c40d43793b06e81681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.671781 kubelet[2931]: E1101 00:38:28.671743 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7bb3847e831235df04aa6b110bcd8cfb5a2a15a92e8e1c40d43793b06e81681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dh7wf" Nov 1 00:38:28.673390 kubelet[2931]: E1101 00:38:28.671927 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7bb3847e831235df04aa6b110bcd8cfb5a2a15a92e8e1c40d43793b06e81681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dh7wf" Nov 1 00:38:28.673532 containerd[1648]: time="2025-11-01T00:38:28.672044845Z" level=error msg="Failed to destroy network for sandbox \"a392bfee98941c4db54146655e0657c3f9d506cb61910f9f7e4ae19d3117dae9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.673831 kubelet[2931]: E1101 00:38:28.673694 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dh7wf_calico-system(882832a0-46d3-43b4-82bb-ea5df649d892)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dh7wf_calico-system(882832a0-46d3-43b4-82bb-ea5df649d892)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7bb3847e831235df04aa6b110bcd8cfb5a2a15a92e8e1c40d43793b06e81681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:38:28.675055 containerd[1648]: time="2025-11-01T00:38:28.674985563Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76954f9f66-p69hm,Uid:7b986655-34d6-4a0c-a36f-8538fa8da4e5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a392bfee98941c4db54146655e0657c3f9d506cb61910f9f7e4ae19d3117dae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.675573 kubelet[2931]: E1101 00:38:28.675528 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a392bfee98941c4db54146655e0657c3f9d506cb61910f9f7e4ae19d3117dae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
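Every sandbox failure in this stretch carries the same root cause spelled out in the error text: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, and that file only appears once the calico/node container is running and has mounted /var/lib/calico/. Until then each RunPodSandbox attempt fails and the kubelet keeps retrying. A minimal node-side check along the same lines — the path is quoted from the log, the rest is an illustrative sketch rather than the plugin's own code:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
	"strings"
)

func main() {
	// Path quoted verbatim in the CNI error messages above.
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	switch {
	case errors.Is(err, fs.ErrNotExist):
		// The state the node is in here: calico-node has not written the file
		// yet, so pod networking cannot be set up.
		fmt.Println("nodename file missing; check that calico/node is running and has mounted /var/lib/calico/")
	case err != nil:
		fmt.Println("unexpected error:", err)
	default:
		fmt.Println("node name recorded by calico-node:", strings.TrimSpace(string(data)))
	}
}
```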
Nov 1 00:38:28.675647 kubelet[2931]: E1101 00:38:28.675599 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a392bfee98941c4db54146655e0657c3f9d506cb61910f9f7e4ae19d3117dae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" Nov 1 00:38:28.675647 kubelet[2931]: E1101 00:38:28.675629 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a392bfee98941c4db54146655e0657c3f9d506cb61910f9f7e4ae19d3117dae9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" Nov 1 00:38:28.675742 kubelet[2931]: E1101 00:38:28.675670 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76954f9f66-p69hm_calico-system(7b986655-34d6-4a0c-a36f-8538fa8da4e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76954f9f66-p69hm_calico-system(7b986655-34d6-4a0c-a36f-8538fa8da4e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a392bfee98941c4db54146655e0657c3f9d506cb61910f9f7e4ae19d3117dae9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:38:28.691877 containerd[1648]: time="2025-11-01T00:38:28.691418916Z" level=error msg="Failed to destroy network for sandbox \"67d6e9a47b1f3ea6daf790f46effb84a881f902511670053f8d5f731f83af4d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.694677 containerd[1648]: time="2025-11-01T00:38:28.694621184Z" level=error msg="Failed to destroy network for sandbox \"e7f010c00957c7e259fa79c2721d23c1b4e5652f4aa3bf8bdf4fa0a2a58ef913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.697884 containerd[1648]: time="2025-11-01T00:38:28.697816539Z" level=error msg="Failed to destroy network for sandbox \"c55d28df8b46e896d44e051022a928369b28f83b46c5b290528b1ada086bb424\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.698316 containerd[1648]: time="2025-11-01T00:38:28.698271418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26pbk,Uid:4519991b-c9e6-4c9d-9f5b-daa009fd2509,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d6e9a47b1f3ea6daf790f46effb84a881f902511670053f8d5f731f83af4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.699021 kubelet[2931]: E1101 00:38:28.698843 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d6e9a47b1f3ea6daf790f46effb84a881f902511670053f8d5f731f83af4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.699021 kubelet[2931]: E1101 00:38:28.698951 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d6e9a47b1f3ea6daf790f46effb84a881f902511670053f8d5f731f83af4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-26pbk" Nov 1 00:38:28.699021 kubelet[2931]: E1101 00:38:28.698981 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67d6e9a47b1f3ea6daf790f46effb84a881f902511670053f8d5f731f83af4d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-26pbk" Nov 1 00:38:28.699214 kubelet[2931]: E1101 00:38:28.699043 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-26pbk_kube-system(4519991b-c9e6-4c9d-9f5b-daa009fd2509)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-26pbk_kube-system(4519991b-c9e6-4c9d-9f5b-daa009fd2509)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67d6e9a47b1f3ea6daf790f46effb84a881f902511670053f8d5f731f83af4d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-26pbk" podUID="4519991b-c9e6-4c9d-9f5b-daa009fd2509" Nov 1 00:38:28.700666 containerd[1648]: time="2025-11-01T00:38:28.700592922Z" level=error msg="Failed to destroy network for sandbox \"4cc5b2e4cf3e3ef424beebabbe73e88ccf007bc8ab826cca8c6651505016c5ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.701170 containerd[1648]: time="2025-11-01T00:38:28.701126302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-c7jpd,Uid:aae018ce-9d35-415b-9be9-2f54c95ef40f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f010c00957c7e259fa79c2721d23c1b4e5652f4aa3bf8bdf4fa0a2a58ef913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.701702 kubelet[2931]: E1101 00:38:28.701456 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f010c00957c7e259fa79c2721d23c1b4e5652f4aa3bf8bdf4fa0a2a58ef913\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.701792 kubelet[2931]: E1101 00:38:28.701705 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f010c00957c7e259fa79c2721d23c1b4e5652f4aa3bf8bdf4fa0a2a58ef913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" Nov 1 00:38:28.701861 kubelet[2931]: E1101 00:38:28.701810 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f010c00957c7e259fa79c2721d23c1b4e5652f4aa3bf8bdf4fa0a2a58ef913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" Nov 1 00:38:28.702022 kubelet[2931]: E1101 00:38:28.701933 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7f010c00957c7e259fa79c2721d23c1b4e5652f4aa3bf8bdf4fa0a2a58ef913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:38:28.711125 containerd[1648]: time="2025-11-01T00:38:28.702086698Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxprb,Uid:7db29291-fdaa-41bb-9242-99edb09b98bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55d28df8b46e896d44e051022a928369b28f83b46c5b290528b1ada086bb424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.711777 containerd[1648]: time="2025-11-01T00:38:28.710967405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6778b5489c-8tqzw,Uid:088c10a7-c984-414d-8ce9-2ebf449685e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc5b2e4cf3e3ef424beebabbe73e88ccf007bc8ab826cca8c6651505016c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.711874 kubelet[2931]: E1101 00:38:28.711308 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55d28df8b46e896d44e051022a928369b28f83b46c5b290528b1ada086bb424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 1 00:38:28.711874 kubelet[2931]: E1101 00:38:28.711354 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc5b2e4cf3e3ef424beebabbe73e88ccf007bc8ab826cca8c6651505016c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.711874 kubelet[2931]: E1101 00:38:28.711393 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55d28df8b46e896d44e051022a928369b28f83b46c5b290528b1ada086bb424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cxprb" Nov 1 00:38:28.711874 kubelet[2931]: E1101 00:38:28.711392 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc5b2e4cf3e3ef424beebabbe73e88ccf007bc8ab826cca8c6651505016c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6778b5489c-8tqzw" Nov 1 00:38:28.712070 kubelet[2931]: E1101 00:38:28.711424 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c55d28df8b46e896d44e051022a928369b28f83b46c5b290528b1ada086bb424\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cxprb" Nov 1 00:38:28.712070 kubelet[2931]: E1101 00:38:28.711494 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc5b2e4cf3e3ef424beebabbe73e88ccf007bc8ab826cca8c6651505016c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6778b5489c-8tqzw" Nov 1 00:38:28.712070 kubelet[2931]: E1101 00:38:28.711538 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6778b5489c-8tqzw_calico-system(088c10a7-c984-414d-8ce9-2ebf449685e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6778b5489c-8tqzw_calico-system(088c10a7-c984-414d-8ce9-2ebf449685e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cc5b2e4cf3e3ef424beebabbe73e88ccf007bc8ab826cca8c6651505016c5ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6778b5489c-8tqzw" podUID="088c10a7-c984-414d-8ce9-2ebf449685e8" Nov 1 00:38:28.712234 kubelet[2931]: E1101 00:38:28.711514 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cxprb_kube-system(7db29291-fdaa-41bb-9242-99edb09b98bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-cxprb_kube-system(7db29291-fdaa-41bb-9242-99edb09b98bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c55d28df8b46e896d44e051022a928369b28f83b46c5b290528b1ada086bb424\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cxprb" podUID="7db29291-fdaa-41bb-9242-99edb09b98bd" Nov 1 00:38:28.729128 containerd[1648]: time="2025-11-01T00:38:28.729075106Z" level=error msg="Failed to destroy network for sandbox \"844d0057da8671b9c3cba8cbbdc1238164661727bb621d37b15b6fefdf7a70db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.730637 containerd[1648]: time="2025-11-01T00:38:28.730595822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-5ft7v,Uid:9347775b-36ca-4333-aa6c-bfa61a2002e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"844d0057da8671b9c3cba8cbbdc1238164661727bb621d37b15b6fefdf7a70db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.731001 kubelet[2931]: E1101 00:38:28.730921 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844d0057da8671b9c3cba8cbbdc1238164661727bb621d37b15b6fefdf7a70db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:28.731080 kubelet[2931]: E1101 00:38:28.731019 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844d0057da8671b9c3cba8cbbdc1238164661727bb621d37b15b6fefdf7a70db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" Nov 1 00:38:28.731449 kubelet[2931]: E1101 00:38:28.731079 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"844d0057da8671b9c3cba8cbbdc1238164661727bb621d37b15b6fefdf7a70db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" Nov 1 00:38:28.731449 kubelet[2931]: E1101 00:38:28.731171 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-798685d547-5ft7v_calico-apiserver(9347775b-36ca-4333-aa6c-bfa61a2002e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-798685d547-5ft7v_calico-apiserver(9347775b-36ca-4333-aa6c-bfa61a2002e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"844d0057da8671b9c3cba8cbbdc1238164661727bb621d37b15b6fefdf7a70db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:38:28.805584 systemd[1]: run-netns-cni\x2d4a9ee75d\x2dffae\x2d3f26\x2d6be9\x2de0f7217f7938.mount: Deactivated successfully. Nov 1 00:38:38.534538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2937551917.mount: Deactivated successfully. Nov 1 00:38:38.621235 containerd[1648]: time="2025-11-01T00:38:38.613275455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:38.623097 containerd[1648]: time="2025-11-01T00:38:38.617264929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:38:38.649989 containerd[1648]: time="2025-11-01T00:38:38.648866721Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:38.649989 containerd[1648]: time="2025-11-01T00:38:38.649829887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.320888782s" Nov 1 00:38:38.649989 containerd[1648]: time="2025-11-01T00:38:38.649870606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:38:38.653376 containerd[1648]: time="2025-11-01T00:38:38.653340330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:38:38.702072 containerd[1648]: time="2025-11-01T00:38:38.702026974Z" level=info msg="CreateContainer within sandbox \"682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:38:38.788664 containerd[1648]: time="2025-11-01T00:38:38.784455715Z" level=info msg="Container e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:38:38.789496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789943396.mount: Deactivated successfully. 
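For scale, the calico/node pull that just completed reports 156883675 bytes read in 10.320888782s, roughly 15 MB/s. A throwaway calculation with the numbers copied from the log; the arithmetic is merely illustrative:

```go
package main

import "fmt"

func main() {
	// Figures reported by containerd for the ghcr.io/flatcar/calico/node:v3.30.4 pull above.
	const bytesRead = 156883675.0
	const seconds = 10.320888782

	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n",
		bytesRead/seconds/1e6,      // decimal megabytes per second
		bytesRead/seconds/(1<<20)) // binary mebibytes per second
}
```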
Nov 1 00:38:38.858784 containerd[1648]: time="2025-11-01T00:38:38.858718493Z" level=info msg="CreateContainer within sandbox \"682d613380cb4e02465ee86a3abfe18b4b1e8a209ee2aee506d92b2414efdf7f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\"" Nov 1 00:38:38.860719 containerd[1648]: time="2025-11-01T00:38:38.860686320Z" level=info msg="StartContainer for \"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\"" Nov 1 00:38:38.878656 containerd[1648]: time="2025-11-01T00:38:38.878576760Z" level=info msg="connecting to shim e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27" address="unix:///run/containerd/s/1635aa319885cd3a86e21b15bdc08dc832079cf1c8f05544bd6fb04266ba5a34" protocol=ttrpc version=3 Nov 1 00:38:38.920078 containerd[1648]: time="2025-11-01T00:38:38.919679868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-c7jpd,Uid:aae018ce-9d35-415b-9be9-2f54c95ef40f,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:38:38.920782 containerd[1648]: time="2025-11-01T00:38:38.920516918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6778b5489c-8tqzw,Uid:088c10a7-c984-414d-8ce9-2ebf449685e8,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:38.930027 systemd[1]: Started cri-containerd-e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27.scope - libcontainer container e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27. Nov 1 00:38:39.059417 containerd[1648]: time="2025-11-01T00:38:39.059155798Z" level=info msg="StartContainer for \"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\" returns successfully" Nov 1 00:38:39.068553 containerd[1648]: time="2025-11-01T00:38:39.067947361Z" level=error msg="Failed to destroy network for sandbox \"ad162aa0d315a06e49f930f1938dbce5d152e751aebfa5caeeeaa1e2e6f4ce91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:39.072432 containerd[1648]: time="2025-11-01T00:38:39.072386651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-c7jpd,Uid:aae018ce-9d35-415b-9be9-2f54c95ef40f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad162aa0d315a06e49f930f1938dbce5d152e751aebfa5caeeeaa1e2e6f4ce91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:39.073699 kubelet[2931]: E1101 00:38:39.073656 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad162aa0d315a06e49f930f1938dbce5d152e751aebfa5caeeeaa1e2e6f4ce91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:39.074378 kubelet[2931]: E1101 00:38:39.074254 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad162aa0d315a06e49f930f1938dbce5d152e751aebfa5caeeeaa1e2e6f4ce91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" Nov 1 00:38:39.074378 kubelet[2931]: E1101 00:38:39.074315 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad162aa0d315a06e49f930f1938dbce5d152e751aebfa5caeeeaa1e2e6f4ce91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" Nov 1 00:38:39.074933 kubelet[2931]: E1101 00:38:39.074598 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad162aa0d315a06e49f930f1938dbce5d152e751aebfa5caeeeaa1e2e6f4ce91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:38:39.120964 containerd[1648]: time="2025-11-01T00:38:39.120803186Z" level=error msg="Failed to destroy network for sandbox \"3fc4920c282dfaa7f0d3b1c837d9d5a784e445650aa03d91b65638b6640f8f23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:39.123016 containerd[1648]: time="2025-11-01T00:38:39.122932646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6778b5489c-8tqzw,Uid:088c10a7-c984-414d-8ce9-2ebf449685e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc4920c282dfaa7f0d3b1c837d9d5a784e445650aa03d91b65638b6640f8f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:39.124003 kubelet[2931]: E1101 00:38:39.123945 2931 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc4920c282dfaa7f0d3b1c837d9d5a784e445650aa03d91b65638b6640f8f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:38:39.124168 kubelet[2931]: E1101 00:38:39.124055 2931 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fc4920c282dfaa7f0d3b1c837d9d5a784e445650aa03d91b65638b6640f8f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6778b5489c-8tqzw" Nov 1 00:38:39.124168 kubelet[2931]: E1101 00:38:39.124090 2931 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3fc4920c282dfaa7f0d3b1c837d9d5a784e445650aa03d91b65638b6640f8f23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6778b5489c-8tqzw" Nov 1 00:38:39.124394 kubelet[2931]: E1101 00:38:39.124173 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6778b5489c-8tqzw_calico-system(088c10a7-c984-414d-8ce9-2ebf449685e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6778b5489c-8tqzw_calico-system(088c10a7-c984-414d-8ce9-2ebf449685e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fc4920c282dfaa7f0d3b1c837d9d5a784e445650aa03d91b65638b6640f8f23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6778b5489c-8tqzw" podUID="088c10a7-c984-414d-8ce9-2ebf449685e8" Nov 1 00:38:39.509583 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:38:39.515412 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:38:39.969889 kubelet[2931]: I1101 00:38:39.969689 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j4znc" podStartSLOduration=2.011013613 podStartE2EDuration="25.969574914s" podCreationTimestamp="2025-11-01 00:38:14 +0000 UTC" firstStartedPulling="2025-11-01 00:38:14.692350699 +0000 UTC m=+23.076985088" lastFinishedPulling="2025-11-01 00:38:38.650911995 +0000 UTC m=+47.035546389" observedRunningTime="2025-11-01 00:38:39.40714023 +0000 UTC m=+47.791774659" watchObservedRunningTime="2025-11-01 00:38:39.969574914 +0000 UTC m=+48.354209320" Nov 1 00:38:40.183066 kubelet[2931]: I1101 00:38:40.182617 2931 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-backend-key-pair\") pod \"088c10a7-c984-414d-8ce9-2ebf449685e8\" (UID: \"088c10a7-c984-414d-8ce9-2ebf449685e8\") " Nov 1 00:38:40.183066 kubelet[2931]: I1101 00:38:40.182952 2931 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsmnn\" (UniqueName: \"kubernetes.io/projected/088c10a7-c984-414d-8ce9-2ebf449685e8-kube-api-access-rsmnn\") pod \"088c10a7-c984-414d-8ce9-2ebf449685e8\" (UID: \"088c10a7-c984-414d-8ce9-2ebf449685e8\") " Nov 1 00:38:40.183066 kubelet[2931]: I1101 00:38:40.183068 2931 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-ca-bundle\") pod \"088c10a7-c984-414d-8ce9-2ebf449685e8\" (UID: \"088c10a7-c984-414d-8ce9-2ebf449685e8\") " Nov 1 00:38:40.189786 kubelet[2931]: I1101 00:38:40.188648 2931 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "088c10a7-c984-414d-8ce9-2ebf449685e8" (UID: "088c10a7-c984-414d-8ce9-2ebf449685e8"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:38:40.207835 systemd[1]: var-lib-kubelet-pods-088c10a7\x2dc984\x2d414d\x2d8ce9\x2d2ebf449685e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drsmnn.mount: Deactivated successfully. Nov 1 00:38:40.208014 systemd[1]: var-lib-kubelet-pods-088c10a7\x2dc984\x2d414d\x2d8ce9\x2d2ebf449685e8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:38:40.209624 kubelet[2931]: I1101 00:38:40.208732 2931 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "088c10a7-c984-414d-8ce9-2ebf449685e8" (UID: "088c10a7-c984-414d-8ce9-2ebf449685e8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:38:40.209624 kubelet[2931]: I1101 00:38:40.209204 2931 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088c10a7-c984-414d-8ce9-2ebf449685e8-kube-api-access-rsmnn" (OuterVolumeSpecName: "kube-api-access-rsmnn") pod "088c10a7-c984-414d-8ce9-2ebf449685e8" (UID: "088c10a7-c984-414d-8ce9-2ebf449685e8"). InnerVolumeSpecName "kube-api-access-rsmnn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:38:40.244815 containerd[1648]: time="2025-11-01T00:38:40.244617373Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\" id:\"a96e8c3e4e5d36bb2002786e0651c3142193999533584f4619cbf11823fad17b\" pid:4031 exit_status:1 exited_at:{seconds:1761957520 nanos:243874133}" Nov 1 00:38:40.285135 kubelet[2931]: I1101 00:38:40.284071 2931 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-backend-key-pair\") on node \"srv-nthov.gb1.brightbox.com\" DevicePath \"\"" Nov 1 00:38:40.285135 kubelet[2931]: I1101 00:38:40.284131 2931 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rsmnn\" (UniqueName: \"kubernetes.io/projected/088c10a7-c984-414d-8ce9-2ebf449685e8-kube-api-access-rsmnn\") on node \"srv-nthov.gb1.brightbox.com\" DevicePath \"\"" Nov 1 00:38:40.285135 kubelet[2931]: I1101 00:38:40.284149 2931 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/088c10a7-c984-414d-8ce9-2ebf449685e8-whisker-ca-bundle\") on node \"srv-nthov.gb1.brightbox.com\" DevicePath \"\"" Nov 1 00:38:40.375975 systemd[1]: Removed slice kubepods-besteffort-pod088c10a7_c984_414d_8ce9_2ebf449685e8.slice - libcontainer container kubepods-besteffort-pod088c10a7_c984_414d_8ce9_2ebf449685e8.slice. Nov 1 00:38:40.499134 systemd[1]: Created slice kubepods-besteffort-podbfbb9d24_fe82_4293_8037_a6b00d156a26.slice - libcontainer container kubepods-besteffort-podbfbb9d24_fe82_4293_8037_a6b00d156a26.slice. 
Nov 1 00:38:40.588534 kubelet[2931]: I1101 00:38:40.587547 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfbb9d24-fe82-4293-8037-a6b00d156a26-whisker-ca-bundle\") pod \"whisker-6645fc67d5-m78ls\" (UID: \"bfbb9d24-fe82-4293-8037-a6b00d156a26\") " pod="calico-system/whisker-6645fc67d5-m78ls" Nov 1 00:38:40.588912 kubelet[2931]: I1101 00:38:40.588884 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bfbb9d24-fe82-4293-8037-a6b00d156a26-whisker-backend-key-pair\") pod \"whisker-6645fc67d5-m78ls\" (UID: \"bfbb9d24-fe82-4293-8037-a6b00d156a26\") " pod="calico-system/whisker-6645fc67d5-m78ls" Nov 1 00:38:40.589447 kubelet[2931]: I1101 00:38:40.589112 2931 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfvx\" (UniqueName: \"kubernetes.io/projected/bfbb9d24-fe82-4293-8037-a6b00d156a26-kube-api-access-5rfvx\") pod \"whisker-6645fc67d5-m78ls\" (UID: \"bfbb9d24-fe82-4293-8037-a6b00d156a26\") " pod="calico-system/whisker-6645fc67d5-m78ls" Nov 1 00:38:40.810192 containerd[1648]: time="2025-11-01T00:38:40.809675068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6645fc67d5-m78ls,Uid:bfbb9d24-fe82-4293-8037-a6b00d156a26,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:40.873092 containerd[1648]: time="2025-11-01T00:38:40.872992811Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\" id:\"6fe87502ed06378e50efbc1534b6e2a132216f28c90a04e5d028ca57379bb951\" pid:4062 exit_status:1 exited_at:{seconds:1761957520 nanos:869324784}" Nov 1 00:38:40.921364 containerd[1648]: time="2025-11-01T00:38:40.921158428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-5ft7v,Uid:9347775b-36ca-4333-aa6c-bfa61a2002e5,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:38:41.250278 systemd-networkd[1568]: calidd62893f5bf: Link UP Nov 1 00:38:41.252675 systemd-networkd[1568]: calidd62893f5bf: Gained carrier Nov 1 00:38:41.271300 containerd[1648]: 2025-11-01 00:38:40.888 [INFO][4087] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:38:41.271300 containerd[1648]: 2025-11-01 00:38:40.942 [INFO][4087] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0 whisker-6645fc67d5- calico-system bfbb9d24-fe82-4293-8037-a6b00d156a26 935 0 2025-11-01 00:38:40 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6645fc67d5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com whisker-6645fc67d5-m78ls eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidd62893f5bf [] [] }} ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" Pod="whisker-6645fc67d5-m78ls" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-" Nov 1 00:38:41.271300 containerd[1648]: 2025-11-01 00:38:40.943 [INFO][4087] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" 
Pod="whisker-6645fc67d5-m78ls" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" Nov 1 00:38:41.271300 containerd[1648]: 2025-11-01 00:38:41.153 [INFO][4107] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" HandleID="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Workload="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.155 [INFO][4107] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" HandleID="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Workload="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002bcfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nthov.gb1.brightbox.com", "pod":"whisker-6645fc67d5-m78ls", "timestamp":"2025-11-01 00:38:41.153150111 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.155 [INFO][4107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.155 [INFO][4107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.156 [INFO][4107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.172 [INFO][4107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.185 [INFO][4107] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.192 [INFO][4107] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.194 [INFO][4107] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.272248 containerd[1648]: 2025-11-01 00:38:41.197 [INFO][4107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.274804 containerd[1648]: 2025-11-01 00:38:41.197 [INFO][4107] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.274804 containerd[1648]: 2025-11-01 00:38:41.200 [INFO][4107] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960 Nov 1 00:38:41.274804 containerd[1648]: 2025-11-01 00:38:41.208 [INFO][4107] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" host="srv-nthov.gb1.brightbox.com" Nov 
1 00:38:41.274804 containerd[1648]: 2025-11-01 00:38:41.215 [INFO][4107] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.1/26] block=192.168.27.0/26 handle="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.274804 containerd[1648]: 2025-11-01 00:38:41.215 [INFO][4107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.1/26] handle="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.274804 containerd[1648]: 2025-11-01 00:38:41.215 [INFO][4107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:38:41.274804 containerd[1648]: 2025-11-01 00:38:41.215 [INFO][4107] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.1/26] IPv6=[] ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" HandleID="k8s-pod-network.e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Workload="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" Nov 1 00:38:41.276570 containerd[1648]: 2025-11-01 00:38:41.221 [INFO][4087] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" Pod="whisker-6645fc67d5-m78ls" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0", GenerateName:"whisker-6645fc67d5-", Namespace:"calico-system", SelfLink:"", UID:"bfbb9d24-fe82-4293-8037-a6b00d156a26", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6645fc67d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6645fc67d5-m78ls", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.27.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd62893f5bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:41.276570 containerd[1648]: 2025-11-01 00:38:41.221 [INFO][4087] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.1/32] ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" Pod="whisker-6645fc67d5-m78ls" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" Nov 1 00:38:41.276731 containerd[1648]: 2025-11-01 00:38:41.221 [INFO][4087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd62893f5bf ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" Pod="whisker-6645fc67d5-m78ls" 
WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" Nov 1 00:38:41.276731 containerd[1648]: 2025-11-01 00:38:41.243 [INFO][4087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" Pod="whisker-6645fc67d5-m78ls" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" Nov 1 00:38:41.276822 containerd[1648]: 2025-11-01 00:38:41.243 [INFO][4087] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" Pod="whisker-6645fc67d5-m78ls" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0", GenerateName:"whisker-6645fc67d5-", Namespace:"calico-system", SelfLink:"", UID:"bfbb9d24-fe82-4293-8037-a6b00d156a26", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6645fc67d5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960", Pod:"whisker-6645fc67d5-m78ls", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.27.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidd62893f5bf", MAC:"8a:b2:68:a5:36:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:41.276908 containerd[1648]: 2025-11-01 00:38:41.264 [INFO][4087] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" Namespace="calico-system" Pod="whisker-6645fc67d5-m78ls" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-whisker--6645fc67d5--m78ls-eth0" Nov 1 00:38:41.379461 systemd-networkd[1568]: calif5b90c331f9: Link UP Nov 1 00:38:41.382159 systemd-networkd[1568]: calif5b90c331f9: Gained carrier Nov 1 00:38:41.416528 containerd[1648]: 2025-11-01 00:38:40.972 [INFO][4097] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:38:41.416528 containerd[1648]: 2025-11-01 00:38:40.995 [INFO][4097] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0 calico-apiserver-798685d547- calico-apiserver 9347775b-36ca-4333-aa6c-bfa61a2002e5 847 0 2025-11-01 00:38:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:798685d547 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com calico-apiserver-798685d547-5ft7v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif5b90c331f9 [] [] }} ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-" Nov 1 00:38:41.416528 containerd[1648]: 2025-11-01 00:38:40.995 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" Nov 1 00:38:41.416528 containerd[1648]: 2025-11-01 00:38:41.153 [INFO][4114] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" HandleID="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.154 [INFO][4114] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" HandleID="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-nthov.gb1.brightbox.com", "pod":"calico-apiserver-798685d547-5ft7v", "timestamp":"2025-11-01 00:38:41.153157613 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.155 [INFO][4114] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.216 [INFO][4114] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.216 [INFO][4114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.278 [INFO][4114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.294 [INFO][4114] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.304 [INFO][4114] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.309 [INFO][4114] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.416897 containerd[1648]: 2025-11-01 00:38:41.312 [INFO][4114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.417411 containerd[1648]: 2025-11-01 00:38:41.313 [INFO][4114] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.417411 containerd[1648]: 2025-11-01 00:38:41.315 [INFO][4114] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562 Nov 1 00:38:41.417411 containerd[1648]: 2025-11-01 00:38:41.321 [INFO][4114] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.417411 containerd[1648]: 2025-11-01 00:38:41.341 [INFO][4114] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.2/26] block=192.168.27.0/26 handle="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.417411 containerd[1648]: 2025-11-01 00:38:41.341 [INFO][4114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.2/26] handle="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:41.417411 containerd[1648]: 2025-11-01 00:38:41.342 [INFO][4114] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:38:41.417411 containerd[1648]: 2025-11-01 00:38:41.348 [INFO][4114] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.2/26] IPv6=[] ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" HandleID="k8s-pod-network.84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" Nov 1 00:38:41.421124 containerd[1648]: 2025-11-01 00:38:41.358 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0", GenerateName:"calico-apiserver-798685d547-", Namespace:"calico-apiserver", SelfLink:"", UID:"9347775b-36ca-4333-aa6c-bfa61a2002e5", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798685d547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-798685d547-5ft7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5b90c331f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:41.421238 containerd[1648]: 2025-11-01 00:38:41.358 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.2/32] ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" Nov 1 00:38:41.421238 containerd[1648]: 2025-11-01 00:38:41.358 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5b90c331f9 ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" Nov 1 00:38:41.421238 containerd[1648]: 2025-11-01 00:38:41.384 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" Nov 1 00:38:41.421390 containerd[1648]: 2025-11-01 00:38:41.385 
[INFO][4097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0", GenerateName:"calico-apiserver-798685d547-", Namespace:"calico-apiserver", SelfLink:"", UID:"9347775b-36ca-4333-aa6c-bfa61a2002e5", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798685d547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562", Pod:"calico-apiserver-798685d547-5ft7v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif5b90c331f9", MAC:"92:d1:d7:68:d9:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:41.421535 containerd[1648]: 2025-11-01 00:38:41.410 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-5ft7v" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--5ft7v-eth0" Nov 1 00:38:41.533129 containerd[1648]: time="2025-11-01T00:38:41.532885825Z" level=info msg="connecting to shim e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960" address="unix:///run/containerd/s/8eab4446f5a3f7412052d6fcd0590322b7fc4ae111d3dea09358ab707a44e2f3" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:41.533467 containerd[1648]: time="2025-11-01T00:38:41.532907035Z" level=info msg="connecting to shim 84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562" address="unix:///run/containerd/s/1ea3b64cf9f399bb6febad9c79fa73c408b9d08b8eba9ca179ef47305002d1bd" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:41.592718 systemd[1]: Started cri-containerd-84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562.scope - libcontainer container 84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562. Nov 1 00:38:41.602597 systemd[1]: Started cri-containerd-e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960.scope - libcontainer container e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960. 
Nov 1 00:38:41.722401 containerd[1648]: time="2025-11-01T00:38:41.721992202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6645fc67d5-m78ls,Uid:bfbb9d24-fe82-4293-8037-a6b00d156a26,Namespace:calico-system,Attempt:0,} returns sandbox id \"e225ed3f8fd2afb7bcea5bb39906b874d5eb03d96548243c3aa66edeb2ed0960\"" Nov 1 00:38:41.733150 containerd[1648]: time="2025-11-01T00:38:41.732924781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:38:41.743742 containerd[1648]: time="2025-11-01T00:38:41.743280317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-5ft7v,Uid:9347775b-36ca-4333-aa6c-bfa61a2002e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"84d35250ea8b2a1efc389a4e953b1532985debe168d946f86e91f4bde061b562\"" Nov 1 00:38:41.957160 kubelet[2931]: I1101 00:38:41.956618 2931 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088c10a7-c984-414d-8ce9-2ebf449685e8" path="/var/lib/kubelet/pods/088c10a7-c984-414d-8ce9-2ebf449685e8/volumes" Nov 1 00:38:42.094554 containerd[1648]: time="2025-11-01T00:38:42.094507026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:42.097538 containerd[1648]: time="2025-11-01T00:38:42.095731923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:38:42.097538 containerd[1648]: time="2025-11-01T00:38:42.095836630Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:38:42.109131 kubelet[2931]: E1101 00:38:42.096012 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:38:42.119391 kubelet[2931]: E1101 00:38:42.109170 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:38:42.121693 containerd[1648]: time="2025-11-01T00:38:42.120734209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:38:42.132656 kubelet[2931]: E1101 00:38:42.132595 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fbbc8cc50c84aebad3a12a3980d47ed,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:42.440039 containerd[1648]: time="2025-11-01T00:38:42.439960816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:42.440931 containerd[1648]: time="2025-11-01T00:38:42.440884696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:38:42.441118 containerd[1648]: time="2025-11-01T00:38:42.441022312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:38:42.443380 kubelet[2931]: E1101 00:38:42.443290 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:38:42.443708 kubelet[2931]: E1101 00:38:42.443575 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:38:42.447798 containerd[1648]: time="2025-11-01T00:38:42.444817974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:38:42.447877 
kubelet[2931]: E1101 00:38:42.444113 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kst54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-5ft7v_calico-apiserver(9347775b-36ca-4333-aa6c-bfa61a2002e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:42.451974 kubelet[2931]: E1101 00:38:42.451530 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:38:42.714470 systemd-networkd[1568]: calif5b90c331f9: Gained IPv6LL Nov 1 00:38:42.766325 containerd[1648]: time="2025-11-01T00:38:42.766229442Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:42.767960 containerd[1648]: time="2025-11-01T00:38:42.767874504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:38:42.767960 containerd[1648]: time="2025-11-01T00:38:42.767921940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:38:42.768214 kubelet[2931]: E1101 00:38:42.768156 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:38:42.768319 kubelet[2931]: E1101 00:38:42.768230 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:38:42.768433 kubelet[2931]: E1101 00:38:42.768376 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
logger="UnhandledError" Nov 1 00:38:42.770271 kubelet[2931]: E1101 00:38:42.769632 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:38:42.927178 containerd[1648]: time="2025-11-01T00:38:42.927024836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76954f9f66-p69hm,Uid:7b986655-34d6-4a0c-a36f-8538fa8da4e5,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:42.927873 containerd[1648]: time="2025-11-01T00:38:42.927607383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbrtf,Uid:12234797-91a4-4e56-83d9-8fb50717e71b,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:43.028829 systemd-networkd[1568]: calidd62893f5bf: Gained IPv6LL Nov 1 00:38:43.115646 systemd-networkd[1568]: vxlan.calico: Link UP Nov 1 00:38:43.115658 systemd-networkd[1568]: vxlan.calico: Gained carrier Nov 1 00:38:43.271728 systemd-networkd[1568]: cali53dfc06137c: Link UP Nov 1 00:38:43.274546 systemd-networkd[1568]: cali53dfc06137c: Gained carrier Nov 1 00:38:43.306077 containerd[1648]: 2025-11-01 00:38:43.028 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0 calico-kube-controllers-76954f9f66- calico-system 7b986655-34d6-4a0c-a36f-8538fa8da4e5 849 0 2025-11-01 00:38:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76954f9f66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com calico-kube-controllers-76954f9f66-p69hm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali53dfc06137c [] [] }} ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-" Nov 1 00:38:43.306077 containerd[1648]: 2025-11-01 00:38:43.028 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" Nov 1 00:38:43.306077 containerd[1648]: 2025-11-01 00:38:43.125 [INFO][4389] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" HandleID="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" 
Workload="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.126 [INFO][4389] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" HandleID="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nthov.gb1.brightbox.com", "pod":"calico-kube-controllers-76954f9f66-p69hm", "timestamp":"2025-11-01 00:38:43.125656727 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.126 [INFO][4389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.126 [INFO][4389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.126 [INFO][4389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.149 [INFO][4389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.159 [INFO][4389] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.167 [INFO][4389] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.171 [INFO][4389] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.307248 containerd[1648]: 2025-11-01 00:38:43.178 [INFO][4389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.309399 containerd[1648]: 2025-11-01 00:38:43.179 [INFO][4389] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.309399 containerd[1648]: 2025-11-01 00:38:43.183 [INFO][4389] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7 Nov 1 00:38:43.309399 containerd[1648]: 2025-11-01 00:38:43.195 [INFO][4389] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.309399 containerd[1648]: 2025-11-01 00:38:43.221 [INFO][4389] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.3/26] block=192.168.27.0/26 handle="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.309399 containerd[1648]: 2025-11-01 00:38:43.222 [INFO][4389] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.27.3/26] handle="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.309399 containerd[1648]: 2025-11-01 00:38:43.222 [INFO][4389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:38:43.309399 containerd[1648]: 2025-11-01 00:38:43.222 [INFO][4389] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.3/26] IPv6=[] ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" HandleID="k8s-pod-network.606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" Nov 1 00:38:43.314686 containerd[1648]: 2025-11-01 00:38:43.239 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0", GenerateName:"calico-kube-controllers-76954f9f66-", Namespace:"calico-system", SelfLink:"", UID:"7b986655-34d6-4a0c-a36f-8538fa8da4e5", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76954f9f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-76954f9f66-p69hm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53dfc06137c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:43.314832 containerd[1648]: 2025-11-01 00:38:43.239 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.3/32] ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" Nov 1 00:38:43.314832 containerd[1648]: 2025-11-01 00:38:43.239 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53dfc06137c ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" Nov 1 00:38:43.314832 containerd[1648]: 2025-11-01 00:38:43.273 [INFO][4370] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" Nov 1 00:38:43.314970 containerd[1648]: 2025-11-01 00:38:43.274 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0", GenerateName:"calico-kube-controllers-76954f9f66-", Namespace:"calico-system", SelfLink:"", UID:"7b986655-34d6-4a0c-a36f-8538fa8da4e5", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76954f9f66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7", Pod:"calico-kube-controllers-76954f9f66-p69hm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53dfc06137c", MAC:"56:26:c1:3b:1b:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:43.315070 containerd[1648]: 2025-11-01 00:38:43.300 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" Namespace="calico-system" Pod="calico-kube-controllers-76954f9f66-p69hm" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--kube--controllers--76954f9f66--p69hm-eth0" Nov 1 00:38:43.353379 systemd-networkd[1568]: cali610f96cf63d: Link UP Nov 1 00:38:43.353929 systemd-networkd[1568]: cali610f96cf63d: Gained carrier Nov 1 00:38:43.399868 containerd[1648]: time="2025-11-01T00:38:43.399809841Z" level=info msg="connecting to shim 606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7" address="unix:///run/containerd/s/59d2705e78e60502fbc97e0624e24082f00d938dac54ae5cffb4a8797860affd" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:43.409965 containerd[1648]: 2025-11-01 00:38:43.052 [INFO][4366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0 csi-node-driver- calico-system 12234797-91a4-4e56-83d9-8fb50717e71b 724 0 
2025-11-01 00:38:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com csi-node-driver-rbrtf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali610f96cf63d [] [] }} ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-" Nov 1 00:38:43.409965 containerd[1648]: 2025-11-01 00:38:43.053 [INFO][4366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" Nov 1 00:38:43.409965 containerd[1648]: 2025-11-01 00:38:43.165 [INFO][4395] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" HandleID="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Workload="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.169 [INFO][4395] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" HandleID="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Workload="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011e5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nthov.gb1.brightbox.com", "pod":"csi-node-driver-rbrtf", "timestamp":"2025-11-01 00:38:43.165145832 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.169 [INFO][4395] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.223 [INFO][4395] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.225 [INFO][4395] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.252 [INFO][4395] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.262 [INFO][4395] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.275 [INFO][4395] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.281 [INFO][4395] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.410256 containerd[1648]: 2025-11-01 00:38:43.287 [INFO][4395] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.412825 containerd[1648]: 2025-11-01 00:38:43.288 [INFO][4395] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.412825 containerd[1648]: 2025-11-01 00:38:43.294 [INFO][4395] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20 Nov 1 00:38:43.412825 containerd[1648]: 2025-11-01 00:38:43.319 [INFO][4395] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.412825 containerd[1648]: 2025-11-01 00:38:43.339 [INFO][4395] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.4/26] block=192.168.27.0/26 handle="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.412825 containerd[1648]: 2025-11-01 00:38:43.340 [INFO][4395] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.4/26] handle="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:43.412825 containerd[1648]: 2025-11-01 00:38:43.340 [INFO][4395] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
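The ipam/ipam.go entries above trace Calico's block-affinity flow on this node: take the host-wide IPAM lock, confirm the affinity for 192.168.27.0/26 on srv-nthov.gb1.brightbox.com, claim the next free address (192.168.27.4 here, after 192.168.27.3 went to calico-kube-controllers), and release the lock. The short Go sketch below is only a reading aid, standard library only, that walks that /26 so the consecutive .3/.4/.5/.6/.7 assignments in this capture are easy to follow; it is not Calico's allocator, which also records handles and a per-block allocation map.

package main

import (
	"fmt"
	"net/netip"
)

// Reading aid only: list the first addresses of the 192.168.27.0/26 block
// that the log above reports as affine to srv-nthov.gb1.brightbox.com.
// Calico's real IPAM additionally skips addresses already recorded in the
// block, which is why the pods in this capture receive .3, .4, .5, .6, .7.
func main() {
	block := netip.MustParsePrefix("192.168.27.0/26") // CIDR from the "Trying affinity" lines
	for addr, i := block.Addr(), 0; i < 8 && block.Contains(addr); addr, i = addr.Next(), i+1 {
		fmt.Println(addr)
	}
}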
Nov 1 00:38:43.412825 containerd[1648]: 2025-11-01 00:38:43.340 [INFO][4395] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.4/26] IPv6=[] ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" HandleID="k8s-pod-network.c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Workload="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" Nov 1 00:38:43.413120 containerd[1648]: 2025-11-01 00:38:43.346 [INFO][4366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12234797-91a4-4e56-83d9-8fb50717e71b", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-rbrtf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali610f96cf63d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:43.413230 containerd[1648]: 2025-11-01 00:38:43.347 [INFO][4366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.4/32] ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" Nov 1 00:38:43.413230 containerd[1648]: 2025-11-01 00:38:43.347 [INFO][4366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali610f96cf63d ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" Nov 1 00:38:43.413230 containerd[1648]: 2025-11-01 00:38:43.353 [INFO][4366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" Nov 1 00:38:43.413345 containerd[1648]: 2025-11-01 00:38:43.357 [INFO][4366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12234797-91a4-4e56-83d9-8fb50717e71b", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20", Pod:"csi-node-driver-rbrtf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali610f96cf63d", MAC:"d2:23:4a:e6:7f:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:43.413449 containerd[1648]: 2025-11-01 00:38:43.386 [INFO][4366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" Namespace="calico-system" Pod="csi-node-driver-rbrtf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-csi--node--driver--rbrtf-eth0" Nov 1 00:38:43.415948 kubelet[2931]: E1101 00:38:43.414794 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:38:43.415948 kubelet[2931]: E1101 00:38:43.415740 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:38:43.488497 containerd[1648]: time="2025-11-01T00:38:43.488121676Z" level=info msg="connecting to shim c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20" address="unix:///run/containerd/s/39d7258cb1f11e6db4df3b97c7887eb1bcd279bd5127173d257cf18e6a031a39" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:43.491463 systemd[1]: Started cri-containerd-606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7.scope - libcontainer container 606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7. Nov 1 00:38:43.563702 systemd[1]: Started cri-containerd-c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20.scope - libcontainer container c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20. Nov 1 00:38:43.671350 containerd[1648]: time="2025-11-01T00:38:43.671274800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rbrtf,Uid:12234797-91a4-4e56-83d9-8fb50717e71b,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3f485e926c993b405d9d70467c32dd865c499295a7ce8ccce262e2c9b44cf20\"" Nov 1 00:38:43.677633 containerd[1648]: time="2025-11-01T00:38:43.677594773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:38:43.755029 containerd[1648]: time="2025-11-01T00:38:43.754973132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76954f9f66-p69hm,Uid:7b986655-34d6-4a0c-a36f-8538fa8da4e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"606b3bae8d051b8e6dd51498f4be7959220e652951fe9841f0d6d36d81b1fbf7\"" Nov 1 00:38:43.920528 containerd[1648]: time="2025-11-01T00:38:43.920441107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxprb,Uid:7db29291-fdaa-41bb-9242-99edb09b98bd,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:43.921853 containerd[1648]: time="2025-11-01T00:38:43.920441192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dh7wf,Uid:882832a0-46d3-43b4-82bb-ea5df649d892,Namespace:calico-system,Attempt:0,}" Nov 1 00:38:43.924755 containerd[1648]: time="2025-11-01T00:38:43.924724985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26pbk,Uid:4519991b-c9e6-4c9d-9f5b-daa009fd2509,Namespace:kube-system,Attempt:0,}" Nov 1 00:38:44.031517 containerd[1648]: time="2025-11-01T00:38:44.031170988Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:44.032836 containerd[1648]: time="2025-11-01T00:38:44.032745253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:38:44.033042 containerd[1648]: time="2025-11-01T00:38:44.032803663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:38:44.033605 kubelet[2931]: E1101 00:38:44.033523 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:38:44.033937 kubelet[2931]: E1101 00:38:44.033672 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:38:44.034418 kubelet[2931]: E1101 00:38:44.034253 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:44.036521 containerd[1648]: time="2025-11-01T00:38:44.036046432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:38:44.314939 systemd-networkd[1568]: cali9d23e44abbb: Link UP Nov 1 00:38:44.318862 systemd-networkd[1568]: cali9d23e44abbb: Gained carrier Nov 1 00:38:44.343606 containerd[1648]: 2025-11-01 00:38:44.091 [INFO][4563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0 coredns-668d6bf9bc- kube-system 4519991b-c9e6-4c9d-9f5b-daa009fd2509 845 0 2025-11-01 00:37:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com coredns-668d6bf9bc-26pbk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9d23e44abbb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-" Nov 1 00:38:44.343606 containerd[1648]: 2025-11-01 00:38:44.092 [INFO][4563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" Nov 1 00:38:44.343606 containerd[1648]: 2025-11-01 00:38:44.240 [INFO][4603] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" HandleID="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Workload="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.241 [INFO][4603] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" HandleID="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Workload="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000359ee0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-nthov.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-26pbk", "timestamp":"2025-11-01 00:38:44.240903883 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.241 [INFO][4603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.241 [INFO][4603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.241 [INFO][4603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.259 [INFO][4603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.266 [INFO][4603] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.272 [INFO][4603] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.275 [INFO][4603] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.344638 containerd[1648]: 2025-11-01 00:38:44.279 [INFO][4603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.345393 containerd[1648]: 2025-11-01 00:38:44.279 [INFO][4603] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.345393 containerd[1648]: 2025-11-01 00:38:44.282 [INFO][4603] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f Nov 1 00:38:44.345393 containerd[1648]: 2025-11-01 00:38:44.288 [INFO][4603] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.345393 containerd[1648]: 2025-11-01 00:38:44.298 [INFO][4603] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.5/26] block=192.168.27.0/26 handle="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.345393 containerd[1648]: 2025-11-01 00:38:44.298 [INFO][4603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.5/26] handle="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.345393 containerd[1648]: 2025-11-01 00:38:44.299 [INFO][4603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:38:44.345393 containerd[1648]: 2025-11-01 00:38:44.299 [INFO][4603] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.5/26] IPv6=[] ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" HandleID="k8s-pod-network.55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Workload="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" Nov 1 00:38:44.345978 containerd[1648]: 2025-11-01 00:38:44.305 [INFO][4563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4519991b-c9e6-4c9d-9f5b-daa009fd2509", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-26pbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d23e44abbb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:44.345978 containerd[1648]: 2025-11-01 00:38:44.306 [INFO][4563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.5/32] ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" Nov 1 00:38:44.345978 containerd[1648]: 2025-11-01 00:38:44.306 [INFO][4563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d23e44abbb ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" Nov 1 00:38:44.345978 containerd[1648]: 2025-11-01 00:38:44.317 [INFO][4563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" Nov 1 00:38:44.345978 containerd[1648]: 2025-11-01 00:38:44.317 [INFO][4563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4519991b-c9e6-4c9d-9f5b-daa009fd2509", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f", Pod:"coredns-668d6bf9bc-26pbk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9d23e44abbb", MAC:"7a:52:57:52:99:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:44.345978 containerd[1648]: 2025-11-01 00:38:44.338 [INFO][4563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" Namespace="kube-system" Pod="coredns-668d6bf9bc-26pbk" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--26pbk-eth0" Nov 1 00:38:44.376007 containerd[1648]: time="2025-11-01T00:38:44.375961991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:44.378227 containerd[1648]: time="2025-11-01T00:38:44.377969108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:38:44.378998 containerd[1648]: time="2025-11-01T00:38:44.378800250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:38:44.379740 kubelet[2931]: E1101 
00:38:44.379668 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:38:44.379875 kubelet[2931]: E1101 00:38:44.379748 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:38:44.386269 containerd[1648]: time="2025-11-01T00:38:44.386098104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:38:44.408043 kubelet[2931]: E1101 00:38:44.380730 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7g8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76954f9f66-p69hm_calico-system(7b986655-34d6-4a0c-a36f-8538fa8da4e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:44.409515 containerd[1648]: time="2025-11-01T00:38:44.408842536Z" level=info msg="connecting to shim 55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f" address="unix:///run/containerd/s/e0aa33b3a0c28595c6bbd63925e4664484c3683c78a00da63db64d69b7adb9ea" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:44.410813 kubelet[2931]: E1101 00:38:44.410672 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:38:44.422100 kubelet[2931]: E1101 00:38:44.421513 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:38:44.471594 systemd-networkd[1568]: cali153ba0b8e19: Link UP Nov 1 00:38:44.473437 systemd-networkd[1568]: cali153ba0b8e19: Gained carrier Nov 1 00:38:44.523059 systemd[1]: Started cri-containerd-55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f.scope - libcontainer container 55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f. 
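Every PullImage failure in this capture is the same containerd-level 404 from ghcr.io ("fetch failed after status: 404 Not Found"), which the kubelet then surfaces as ErrImagePull and backs off as ImagePullBackOff; the v3.30.4 Calico references were simply not resolvable at pull time. The sketch below reproduces one such pull directly against the node's containerd, bypassing the kubelet. It assumes the containerd 1.x Go client import path and the default socket path; the image reference and the k8s.io namespace are copied from the log lines above.

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// Sketch only: re-run one of the failing pulls from this log directly against
// the node's containerd. The socket path is the containerd default and may
// differ on a given host.
func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		// On this node the pull is expected to fail with the same
		// "not found" error that containerd logs above.
		log.Fatalf("pull failed: %v", err)
	}
	fmt.Println("pulled", img.Name())
}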
Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.069 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0 coredns-668d6bf9bc- kube-system 7db29291-fdaa-41bb-9242-99edb09b98bd 851 0 2025-11-01 00:37:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com coredns-668d6bf9bc-cxprb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali153ba0b8e19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.070 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.243 [INFO][4596] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" HandleID="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Workload="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.243 [INFO][4596] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" HandleID="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Workload="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000320130), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-nthov.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-cxprb", "timestamp":"2025-11-01 00:38:44.243099254 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.243 [INFO][4596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.299 [INFO][4596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.299 [INFO][4596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.357 [INFO][4596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.375 [INFO][4596] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.402 [INFO][4596] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.412 [INFO][4596] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.416 [INFO][4596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.416 [INFO][4596] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.419 [INFO][4596] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94 Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.434 [INFO][4596] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.451 [INFO][4596] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.6/26] block=192.168.27.0/26 handle="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.452 [INFO][4596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.6/26] handle="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.453 [INFO][4596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:38:44.542664 containerd[1648]: 2025-11-01 00:38:44.453 [INFO][4596] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.6/26] IPv6=[] ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" HandleID="k8s-pod-network.3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Workload="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" Nov 1 00:38:44.546297 containerd[1648]: 2025-11-01 00:38:44.465 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7db29291-fdaa-41bb-9242-99edb09b98bd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-cxprb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali153ba0b8e19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:44.546297 containerd[1648]: 2025-11-01 00:38:44.465 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.6/32] ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" Nov 1 00:38:44.546297 containerd[1648]: 2025-11-01 00:38:44.465 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali153ba0b8e19 ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" Nov 1 00:38:44.546297 containerd[1648]: 2025-11-01 00:38:44.474 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" Nov 1 00:38:44.546297 containerd[1648]: 2025-11-01 00:38:44.476 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7db29291-fdaa-41bb-9242-99edb09b98bd", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94", Pod:"coredns-668d6bf9bc-cxprb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali153ba0b8e19", MAC:"62:15:cb:d2:af:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:44.546297 containerd[1648]: 2025-11-01 00:38:44.504 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" Namespace="kube-system" Pod="coredns-668d6bf9bc-cxprb" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-coredns--668d6bf9bc--cxprb-eth0" Nov 1 00:38:44.620721 containerd[1648]: time="2025-11-01T00:38:44.620661951Z" level=info msg="connecting to shim 3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94" address="unix:///run/containerd/s/0b97162f1e6274872d689030d3d10f6f124d57324614ba487e6b7073cd2de162" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:44.628631 systemd-networkd[1568]: cali53dfc06137c: Gained IPv6LL Nov 1 00:38:44.641224 systemd-networkd[1568]: calia4d35015506: Link UP Nov 1 00:38:44.644557 systemd-networkd[1568]: calia4d35015506: Gained carrier Nov 1 00:38:44.694592 systemd-networkd[1568]: cali610f96cf63d: Gained IPv6LL Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.066 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0 goldmane-666569f655- calico-system 882832a0-46d3-43b4-82bb-ea5df649d892 850 0 2025-11-01 00:38:11 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com goldmane-666569f655-dh7wf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia4d35015506 [] [] }} ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.067 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.245 [INFO][4593] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" HandleID="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Workload="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.247 [INFO][4593] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" HandleID="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Workload="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003264f0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-nthov.gb1.brightbox.com", "pod":"goldmane-666569f655-dh7wf", "timestamp":"2025-11-01 00:38:44.245888576 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.247 [INFO][4593] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.461 [INFO][4593] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.461 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.488 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.532 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.553 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.558 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.564 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.564 [INFO][4593] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.569 [INFO][4593] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6 Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.580 [INFO][4593] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.617 [INFO][4593] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.7/26] block=192.168.27.0/26 handle="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.618 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.7/26] handle="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.618 [INFO][4593] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:38:44.699967 containerd[1648]: 2025-11-01 00:38:44.618 [INFO][4593] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.7/26] IPv6=[] ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" HandleID="k8s-pod-network.5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Workload="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" Nov 1 00:38:44.702813 containerd[1648]: 2025-11-01 00:38:44.632 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"882832a0-46d3-43b4-82bb-ea5df649d892", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-dh7wf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.27.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia4d35015506", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:44.702813 containerd[1648]: 2025-11-01 00:38:44.633 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.7/32] ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" Nov 1 00:38:44.702813 containerd[1648]: 2025-11-01 00:38:44.633 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4d35015506 ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" Nov 1 00:38:44.702813 containerd[1648]: 2025-11-01 00:38:44.653 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" Nov 1 00:38:44.702813 containerd[1648]: 2025-11-01 00:38:44.668 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" 
Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"882832a0-46d3-43b4-82bb-ea5df649d892", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6", Pod:"goldmane-666569f655-dh7wf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.27.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia4d35015506", MAC:"2a:44:bd:d2:2e:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:44.702813 containerd[1648]: 2025-11-01 00:38:44.690 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" Namespace="calico-system" Pod="goldmane-666569f655-dh7wf" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-goldmane--666569f655--dh7wf-eth0" Nov 1 00:38:44.715272 containerd[1648]: time="2025-11-01T00:38:44.714462139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-26pbk,Uid:4519991b-c9e6-4c9d-9f5b-daa009fd2509,Namespace:kube-system,Attempt:0,} returns sandbox id \"55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f\"" Nov 1 00:38:44.728241 containerd[1648]: time="2025-11-01T00:38:44.727276874Z" level=info msg="CreateContainer within sandbox \"55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:38:44.729240 containerd[1648]: time="2025-11-01T00:38:44.728216146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:44.761264 containerd[1648]: time="2025-11-01T00:38:44.760530742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:38:44.769889 kubelet[2931]: E1101 00:38:44.769653 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:38:44.769889 kubelet[2931]: E1101 00:38:44.769772 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:38:44.773541 kubelet[2931]: E1101 00:38:44.772870 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:44.774583 kubelet[2931]: E1101 00:38:44.774523 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:44.776320 containerd[1648]: time="2025-11-01T00:38:44.776239759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:38:44.813977 systemd[1]: Started cri-containerd-3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94.scope - libcontainer container 3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94. Nov 1 00:38:44.821516 containerd[1648]: time="2025-11-01T00:38:44.821215350Z" level=info msg="Container 59121c0d48448081218357896ae151f840ebb246c3ab4bf629167d2ff8888ec6: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:38:44.838045 containerd[1648]: time="2025-11-01T00:38:44.837881016Z" level=info msg="connecting to shim 5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6" address="unix:///run/containerd/s/8cfd2ac3a3f425b3a1e639aaad459c206da5eee178eb1614af7e8744f2bf8bcb" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:44.840353 containerd[1648]: time="2025-11-01T00:38:44.840322023Z" level=info msg="CreateContainer within sandbox \"55dc5bfbe124a4bf03cabbad0591f88411d78cecceb09b5d44c74d14e3485d2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59121c0d48448081218357896ae151f840ebb246c3ab4bf629167d2ff8888ec6\"" Nov 1 00:38:44.842150 containerd[1648]: time="2025-11-01T00:38:44.842112028Z" level=info msg="StartContainer for \"59121c0d48448081218357896ae151f840ebb246c3ab4bf629167d2ff8888ec6\"" Nov 1 00:38:44.843245 containerd[1648]: time="2025-11-01T00:38:44.843207683Z" level=info msg="connecting to shim 59121c0d48448081218357896ae151f840ebb246c3ab4bf629167d2ff8888ec6" address="unix:///run/containerd/s/e0aa33b3a0c28595c6bbd63925e4664484c3683c78a00da63db64d69b7adb9ea" protocol=ttrpc version=3 Nov 1 00:38:44.884757 systemd-networkd[1568]: vxlan.calico: Gained IPv6LL Nov 1 00:38:44.898799 systemd[1]: Started cri-containerd-59121c0d48448081218357896ae151f840ebb246c3ab4bf629167d2ff8888ec6.scope - libcontainer container 59121c0d48448081218357896ae151f840ebb246c3ab4bf629167d2ff8888ec6. Nov 1 00:38:44.911873 systemd[1]: Started cri-containerd-5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6.scope - libcontainer container 5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6. Nov 1 00:38:45.011141 containerd[1648]: time="2025-11-01T00:38:45.010914020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cxprb,Uid:7db29291-fdaa-41bb-9242-99edb09b98bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94\"" Nov 1 00:38:45.048468 containerd[1648]: time="2025-11-01T00:38:45.047675913Z" level=info msg="CreateContainer within sandbox \"3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:38:45.079280 containerd[1648]: time="2025-11-01T00:38:45.078991050Z" level=info msg="Container 213b116810824e057a0a76ba4d270ef53e92d3184f05b897505e634b91e5189d: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:38:45.089277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912336311.mount: Deactivated successfully. 
Nov 1 00:38:45.101229 containerd[1648]: time="2025-11-01T00:38:45.101178603Z" level=info msg="StartContainer for \"59121c0d48448081218357896ae151f840ebb246c3ab4bf629167d2ff8888ec6\" returns successfully" Nov 1 00:38:45.102541 containerd[1648]: time="2025-11-01T00:38:45.102467235Z" level=info msg="CreateContainer within sandbox \"3ccb2891ee9c5f2487fc9259f1f3b579da809090fe639ade982688170017ce94\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"213b116810824e057a0a76ba4d270ef53e92d3184f05b897505e634b91e5189d\"" Nov 1 00:38:45.104758 containerd[1648]: time="2025-11-01T00:38:45.104619242Z" level=info msg="StartContainer for \"213b116810824e057a0a76ba4d270ef53e92d3184f05b897505e634b91e5189d\"" Nov 1 00:38:45.108131 containerd[1648]: time="2025-11-01T00:38:45.108003566Z" level=info msg="connecting to shim 213b116810824e057a0a76ba4d270ef53e92d3184f05b897505e634b91e5189d" address="unix:///run/containerd/s/0b97162f1e6274872d689030d3d10f6f124d57324614ba487e6b7073cd2de162" protocol=ttrpc version=3 Nov 1 00:38:45.154155 systemd[1]: Started cri-containerd-213b116810824e057a0a76ba4d270ef53e92d3184f05b897505e634b91e5189d.scope - libcontainer container 213b116810824e057a0a76ba4d270ef53e92d3184f05b897505e634b91e5189d. Nov 1 00:38:45.279143 containerd[1648]: time="2025-11-01T00:38:45.279026168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dh7wf,Uid:882832a0-46d3-43b4-82bb-ea5df649d892,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a98cd375b3f04e5b1f9f6e6d716a29474ee4ae5f1f050a113afe951a5b970f6\"" Nov 1 00:38:45.283761 containerd[1648]: time="2025-11-01T00:38:45.283725230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:38:45.296514 containerd[1648]: time="2025-11-01T00:38:45.296441993Z" level=info msg="StartContainer for \"213b116810824e057a0a76ba4d270ef53e92d3184f05b897505e634b91e5189d\" returns successfully" Nov 1 00:38:45.457186 kubelet[2931]: E1101 00:38:45.457022 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:45.458350 kubelet[2931]: E1101 00:38:45.457392 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:38:45.461677 systemd-networkd[1568]: cali9d23e44abbb: Gained IPv6LL Nov 1 00:38:45.582629 kubelet[2931]: I1101 00:38:45.578834 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cxprb" podStartSLOduration=49.574043572 podStartE2EDuration="49.574043572s" podCreationTimestamp="2025-11-01 00:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:45.541008653 +0000 UTC m=+53.925643074" watchObservedRunningTime="2025-11-01 00:38:45.574043572 +0000 UTC m=+53.958677986" Nov 1 00:38:45.620220 containerd[1648]: time="2025-11-01T00:38:45.620164658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:45.623957 containerd[1648]: time="2025-11-01T00:38:45.623802387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:38:45.623957 containerd[1648]: time="2025-11-01T00:38:45.623919482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:38:45.624344 kubelet[2931]: E1101 00:38:45.624251 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:38:45.624344 kubelet[2931]: E1101 00:38:45.624332 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:38:45.624745 kubelet[2931]: E1101 00:38:45.624576 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6vvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dh7wf_calico-system(882832a0-46d3-43b4-82bb-ea5df649d892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:45.626303 kubelet[2931]: E1101 00:38:45.626228 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:38:45.632034 kubelet[2931]: I1101 
00:38:45.631498 2931 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-26pbk" podStartSLOduration=49.63102153 podStartE2EDuration="49.63102153s" podCreationTimestamp="2025-11-01 00:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:38:45.626057849 +0000 UTC m=+54.010692265" watchObservedRunningTime="2025-11-01 00:38:45.63102153 +0000 UTC m=+54.015655932" Nov 1 00:38:45.844671 systemd-networkd[1568]: cali153ba0b8e19: Gained IPv6LL Nov 1 00:38:46.420649 systemd-networkd[1568]: calia4d35015506: Gained IPv6LL Nov 1 00:38:46.458977 kubelet[2931]: E1101 00:38:46.458517 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:38:50.919967 containerd[1648]: time="2025-11-01T00:38:50.919899244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-c7jpd,Uid:aae018ce-9d35-415b-9be9-2f54c95ef40f,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:38:51.094073 systemd-networkd[1568]: caliaf179d9d00d: Link UP Nov 1 00:38:51.096274 systemd-networkd[1568]: caliaf179d9d00d: Gained carrier Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:50.987 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0 calico-apiserver-798685d547- calico-apiserver aae018ce-9d35-415b-9be9-2f54c95ef40f 848 0 2025-11-01 00:38:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:798685d547 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-nthov.gb1.brightbox.com calico-apiserver-798685d547-c7jpd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaf179d9d00d [] [] }} ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:50.988 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.031 [INFO][4918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" HandleID="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" Nov 1 00:38:51.126149 
containerd[1648]: 2025-11-01 00:38:51.031 [INFO][4918] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" HandleID="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-nthov.gb1.brightbox.com", "pod":"calico-apiserver-798685d547-c7jpd", "timestamp":"2025-11-01 00:38:51.031618971 +0000 UTC"}, Hostname:"srv-nthov.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.032 [INFO][4918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.032 [INFO][4918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.032 [INFO][4918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-nthov.gb1.brightbox.com' Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.042 [INFO][4918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.049 [INFO][4918] ipam/ipam.go 394: Looking up existing affinities for host host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.058 [INFO][4918] ipam/ipam.go 511: Trying affinity for 192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.061 [INFO][4918] ipam/ipam.go 158: Attempting to load block cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.065 [INFO][4918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.27.0/26 host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.065 [INFO][4918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.27.0/26 handle="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.069 [INFO][4918] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850 Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.074 [INFO][4918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.27.0/26 handle="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.083 [INFO][4918] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.27.8/26] block=192.168.27.0/26 handle="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.083 [INFO][4918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.27.8/26] 
handle="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" host="srv-nthov.gb1.brightbox.com" Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.083 [INFO][4918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:38:51.126149 containerd[1648]: 2025-11-01 00:38:51.083 [INFO][4918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.27.8/26] IPv6=[] ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" HandleID="k8s-pod-network.935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Workload="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" Nov 1 00:38:51.131313 containerd[1648]: 2025-11-01 00:38:51.087 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0", GenerateName:"calico-apiserver-798685d547-", Namespace:"calico-apiserver", SelfLink:"", UID:"aae018ce-9d35-415b-9be9-2f54c95ef40f", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798685d547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-798685d547-c7jpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf179d9d00d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:51.131313 containerd[1648]: 2025-11-01 00:38:51.087 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.27.8/32] ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" Nov 1 00:38:51.131313 containerd[1648]: 2025-11-01 00:38:51.087 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf179d9d00d ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" Nov 1 00:38:51.131313 containerd[1648]: 2025-11-01 00:38:51.098 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" Nov 1 00:38:51.131313 containerd[1648]: 2025-11-01 00:38:51.098 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0", GenerateName:"calico-apiserver-798685d547-", Namespace:"calico-apiserver", SelfLink:"", UID:"aae018ce-9d35-415b-9be9-2f54c95ef40f", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 38, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"798685d547", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-nthov.gb1.brightbox.com", ContainerID:"935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850", Pod:"calico-apiserver-798685d547-c7jpd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaf179d9d00d", MAC:"7e:f5:d1:61:30:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:38:51.131313 containerd[1648]: 2025-11-01 00:38:51.114 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" Namespace="calico-apiserver" Pod="calico-apiserver-798685d547-c7jpd" WorkloadEndpoint="srv--nthov.gb1.brightbox.com-k8s-calico--apiserver--798685d547--c7jpd-eth0" Nov 1 00:38:51.174830 containerd[1648]: time="2025-11-01T00:38:51.174649854Z" level=info msg="connecting to shim 935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850" address="unix:///run/containerd/s/e212ed6059cf6b076f17d407bf73344807047d2cc1c712839207382a7e026000" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:38:51.220957 systemd[1]: Started cri-containerd-935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850.scope - libcontainer container 935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850. 
Nov 1 00:38:51.293246 containerd[1648]: time="2025-11-01T00:38:51.293169200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-798685d547-c7jpd,Uid:aae018ce-9d35-415b-9be9-2f54c95ef40f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"935de8b85c3d423f8d47205a8325a3e58731bb136c0657ceca21515453e36850\"" Nov 1 00:38:51.296973 containerd[1648]: time="2025-11-01T00:38:51.296905075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:38:51.596395 containerd[1648]: time="2025-11-01T00:38:51.596220429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:51.597610 containerd[1648]: time="2025-11-01T00:38:51.597558404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:38:51.597720 containerd[1648]: time="2025-11-01T00:38:51.597678563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:38:51.598018 kubelet[2931]: E1101 00:38:51.597917 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:38:51.598853 kubelet[2931]: E1101 00:38:51.598040 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:38:51.598853 kubelet[2931]: E1101 00:38:51.598253 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxbrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:51.599701 kubelet[2931]: E1101 00:38:51.599653 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:38:52.489137 kubelet[2931]: E1101 00:38:52.489034 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:38:52.692714 systemd-networkd[1568]: caliaf179d9d00d: Gained IPv6LL Nov 1 00:38:55.925512 containerd[1648]: time="2025-11-01T00:38:55.925161794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:38:56.230147 containerd[1648]: time="2025-11-01T00:38:56.228521644Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:56.231348 containerd[1648]: time="2025-11-01T00:38:56.230947616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:38:56.231348 containerd[1648]: time="2025-11-01T00:38:56.231096167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:38:56.231556 kubelet[2931]: E1101 00:38:56.231354 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:38:56.231556 kubelet[2931]: E1101 00:38:56.231442 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:38:56.235059 kubelet[2931]: E1101 00:38:56.231781 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:56.235567 containerd[1648]: time="2025-11-01T00:38:56.232123208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:38:56.536251 containerd[1648]: time="2025-11-01T00:38:56.535984421Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:56.537212 containerd[1648]: time="2025-11-01T00:38:56.537138014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:38:56.537308 containerd[1648]: time="2025-11-01T00:38:56.537257392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:38:56.537578 kubelet[2931]: E1101 00:38:56.537522 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:38:56.537785 kubelet[2931]: E1101 00:38:56.537593 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:38:56.538737 containerd[1648]: time="2025-11-01T00:38:56.538113826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:38:56.540371 kubelet[2931]: E1101 00:38:56.540214 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7g8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76954f9f66-p69hm_calico-system(7b986655-34d6-4a0c-a36f-8538fa8da4e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:56.543438 kubelet[2931]: E1101 00:38:56.541992 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:38:56.841028 containerd[1648]: time="2025-11-01T00:38:56.840805936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:56.842619 containerd[1648]: time="2025-11-01T00:38:56.842510507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:38:56.842619 containerd[1648]: time="2025-11-01T00:38:56.842578544Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:38:56.843099 kubelet[2931]: E1101 00:38:56.843034 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:38:56.843506 kubelet[2931]: E1101 00:38:56.843281 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:38:56.843791 kubelet[2931]: E1101 00:38:56.843611 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fbbc8cc50c84aebad3a12a3980d47ed,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:56.844663 containerd[1648]: time="2025-11-01T00:38:56.844623593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:38:57.163900 containerd[1648]: time="2025-11-01T00:38:57.163775964Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:57.166533 containerd[1648]: time="2025-11-01T00:38:57.165849976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:38:57.166533 containerd[1648]: time="2025-11-01T00:38:57.165963348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:38:57.166729 kubelet[2931]: E1101 00:38:57.166128 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:38:57.166729 kubelet[2931]: E1101 00:38:57.166195 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:38:57.167160 containerd[1648]: time="2025-11-01T00:38:57.166733097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:38:57.167673 kubelet[2931]: E1101 00:38:57.167089 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:57.169058 kubelet[2931]: E1101 00:38:57.168800 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:38:57.476985 containerd[1648]: time="2025-11-01T00:38:57.476814993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:57.478508 containerd[1648]: time="2025-11-01T00:38:57.478400362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:38:57.478588 containerd[1648]: time="2025-11-01T00:38:57.478544880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:38:57.478923 kubelet[2931]: E1101 00:38:57.478841 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:38:57.479408 kubelet[2931]: E1101 00:38:57.478943 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:38:57.479465 kubelet[2931]: E1101 00:38:57.479365 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:57.481531 containerd[1648]: time="2025-11-01T00:38:57.480883190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:38:57.481644 kubelet[2931]: E1101 00:38:57.480812 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:38:57.790614 containerd[1648]: time="2025-11-01T00:38:57.790285027Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:57.792059 containerd[1648]: time="2025-11-01T00:38:57.791933893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:38:57.792059 containerd[1648]: time="2025-11-01T00:38:57.792019004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:38:57.792687 kubelet[2931]: E1101 00:38:57.792221 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:38:57.792687 kubelet[2931]: E1101 00:38:57.792285 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:38:57.792687 kubelet[2931]: E1101 00:38:57.792469 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kst54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-5ft7v_calico-apiserver(9347775b-36ca-4333-aa6c-bfa61a2002e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:57.793859 kubelet[2931]: E1101 00:38:57.793818 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:38:58.921091 containerd[1648]: time="2025-11-01T00:38:58.920971328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:38:59.237818 containerd[1648]: time="2025-11-01T00:38:59.237625620Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:38:59.239165 containerd[1648]: time="2025-11-01T00:38:59.239122665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:38:59.239272 containerd[1648]: time="2025-11-01T00:38:59.239227085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:38:59.239609 kubelet[2931]: E1101 00:38:59.239554 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:38:59.240054 kubelet[2931]: E1101 00:38:59.239624 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:38:59.245384 kubelet[2931]: E1101 00:38:59.245286 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6vvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dh7wf_calico-system(882832a0-46d3-43b4-82bb-ea5df649d892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:38:59.246611 kubelet[2931]: E1101 00:38:59.246529 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:39:05.920670 containerd[1648]: 
time="2025-11-01T00:39:05.920565166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:39:06.224099 containerd[1648]: time="2025-11-01T00:39:06.223920748Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:06.225177 containerd[1648]: time="2025-11-01T00:39:06.225123413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:39:06.225422 containerd[1648]: time="2025-11-01T00:39:06.225153456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:39:06.225740 kubelet[2931]: E1101 00:39:06.225574 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:39:06.226620 kubelet[2931]: E1101 00:39:06.226212 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:39:06.226620 kubelet[2931]: E1101 00:39:06.226426 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxbrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:06.228180 kubelet[2931]: E1101 00:39:06.228140 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:39:08.921435 kubelet[2931]: E1101 00:39:08.921093 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:39:08.923536 kubelet[2931]: E1101 00:39:08.922569 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:39:10.511937 containerd[1648]: 
time="2025-11-01T00:39:10.511865636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\" id:\"99c392a38dc96e3da45bb14d11788f04b83eb5b1ac4cb2498474e65554eaf400\" pid:5014 exited_at:{seconds:1761957550 nanos:511426342}" Nov 1 00:39:10.922391 kubelet[2931]: E1101 00:39:10.922313 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:39:11.924751 kubelet[2931]: E1101 00:39:11.924689 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:39:12.921566 kubelet[2931]: E1101 00:39:12.921443 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:39:19.428307 systemd[1]: Started sshd@7-10.230.36.206:22-139.178.89.65:42358.service - OpenSSH per-connection server daemon (139.178.89.65:42358). Nov 1 00:39:20.423026 sshd[5038]: Accepted publickey for core from 139.178.89.65 port 42358 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:20.427911 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:20.441347 systemd-logind[1630]: New session 10 of user core. Nov 1 00:39:20.449024 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:39:21.664526 sshd[5042]: Connection closed by 139.178.89.65 port 42358 Nov 1 00:39:21.665650 sshd-session[5038]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:21.676743 systemd[1]: sshd@7-10.230.36.206:22-139.178.89.65:42358.service: Deactivated successfully. 
Nov 1 00:39:21.683054 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:39:21.686780 systemd-logind[1630]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:39:21.689798 systemd-logind[1630]: Removed session 10. Nov 1 00:39:21.923375 kubelet[2931]: E1101 00:39:21.922790 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:39:21.924996 containerd[1648]: time="2025-11-01T00:39:21.923108045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:39:22.238343 containerd[1648]: time="2025-11-01T00:39:22.237737782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:22.240995 containerd[1648]: time="2025-11-01T00:39:22.240761633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:39:22.241713 containerd[1648]: time="2025-11-01T00:39:22.241675913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:39:22.241925 kubelet[2931]: E1101 00:39:22.241669 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:39:22.241925 kubelet[2931]: E1101 00:39:22.241721 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:39:22.242240 kubelet[2931]: E1101 00:39:22.241937 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:22.245281 containerd[1648]: time="2025-11-01T00:39:22.245129061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:39:22.547714 containerd[1648]: time="2025-11-01T00:39:22.547060402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:22.549592 containerd[1648]: time="2025-11-01T00:39:22.549545338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:39:22.550348 containerd[1648]: time="2025-11-01T00:39:22.550005451Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:39:22.552474 kubelet[2931]: E1101 00:39:22.551693 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:39:22.552474 kubelet[2931]: E1101 00:39:22.551753 2931 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:39:22.552474 kubelet[2931]: E1101 00:39:22.551897 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:22.553311 kubelet[2931]: E1101 00:39:22.553252 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:39:22.922203 containerd[1648]: time="2025-11-01T00:39:22.922088682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:39:23.223674 containerd[1648]: time="2025-11-01T00:39:23.222927229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:23.226628 containerd[1648]: time="2025-11-01T00:39:23.225000362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:39:23.226628 containerd[1648]: time="2025-11-01T00:39:23.225138021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:39:23.226806 kubelet[2931]: E1101 00:39:23.226026 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:39:23.226806 kubelet[2931]: E1101 00:39:23.226135 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:39:23.226806 kubelet[2931]: E1101 00:39:23.226334 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fbbc8cc50c84aebad3a12a3980d47ed,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:23.230954 containerd[1648]: time="2025-11-01T00:39:23.230905476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:39:23.537565 containerd[1648]: time="2025-11-01T00:39:23.537330665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:23.538811 containerd[1648]: time="2025-11-01T00:39:23.538751248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:39:23.538911 containerd[1648]: time="2025-11-01T00:39:23.538886036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:39:23.539789 kubelet[2931]: E1101 00:39:23.539702 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:39:23.539909 kubelet[2931]: E1101 00:39:23.539814 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:39:23.540197 kubelet[2931]: E1101 00:39:23.540058 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:23.541426 kubelet[2931]: E1101 00:39:23.541379 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:39:23.925532 containerd[1648]: time="2025-11-01T00:39:23.923244171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:39:24.253958 containerd[1648]: time="2025-11-01T00:39:24.253724116Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:24.259516 containerd[1648]: time="2025-11-01T00:39:24.258629470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:39:24.259516 containerd[1648]: time="2025-11-01T00:39:24.258738531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:39:24.259960 kubelet[2931]: E1101 00:39:24.259879 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:39:24.260345 kubelet[2931]: E1101 00:39:24.259958 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:39:24.260345 kubelet[2931]: E1101 00:39:24.260164 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7g8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76954f9f66-p69hm_calico-system(7b986655-34d6-4a0c-a36f-8538fa8da4e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:24.261941 kubelet[2931]: E1101 00:39:24.261885 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:39:24.921964 containerd[1648]: time="2025-11-01T00:39:24.921056560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:39:25.270222 containerd[1648]: time="2025-11-01T00:39:25.270031479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:25.272694 containerd[1648]: time="2025-11-01T00:39:25.272595229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:39:25.272926 containerd[1648]: time="2025-11-01T00:39:25.272823352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:39:25.273819 kubelet[2931]: E1101 00:39:25.273428 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:39:25.273819 kubelet[2931]: E1101 00:39:25.273533 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:39:25.273819 kubelet[2931]: E1101 
00:39:25.273725 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kst54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-5ft7v_calico-apiserver(9347775b-36ca-4333-aa6c-bfa61a2002e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:25.275628 kubelet[2931]: E1101 00:39:25.275573 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:39:26.824390 systemd[1]: Started sshd@8-10.230.36.206:22-139.178.89.65:59686.service - OpenSSH per-connection server daemon (139.178.89.65:59686). 
Nov 1 00:39:26.920954 containerd[1648]: time="2025-11-01T00:39:26.920895043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:39:27.247212 containerd[1648]: time="2025-11-01T00:39:27.246730630Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:27.252871 containerd[1648]: time="2025-11-01T00:39:27.252796630Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:39:27.253234 containerd[1648]: time="2025-11-01T00:39:27.252856605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:39:27.254562 kubelet[2931]: E1101 00:39:27.254515 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:39:27.256813 kubelet[2931]: E1101 00:39:27.254705 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:39:27.256813 kubelet[2931]: E1101 00:39:27.254892 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6vvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dh7wf_calico-system(882832a0-46d3-43b4-82bb-ea5df649d892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:27.257226 kubelet[2931]: E1101 00:39:27.257186 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:39:27.766386 sshd[5062]: Accepted publickey for core from 139.178.89.65 port 59686 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:27.769209 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:27.779809 systemd-logind[1630]: New session 11 of user core. Nov 1 00:39:27.789353 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:39:28.547302 sshd[5067]: Connection closed by 139.178.89.65 port 59686 Nov 1 00:39:28.547816 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:28.553388 systemd-logind[1630]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:39:28.556151 systemd[1]: sshd@8-10.230.36.206:22-139.178.89.65:59686.service: Deactivated successfully. Nov 1 00:39:28.561899 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:39:28.564649 systemd-logind[1630]: Removed session 11. Nov 1 00:39:33.715204 systemd[1]: Started sshd@9-10.230.36.206:22-139.178.89.65:59698.service - OpenSSH per-connection server daemon (139.178.89.65:59698). 
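
Since the kubelet[2931] here delegates pulls to containerd[1648] over CRI, the same failure can be reproduced on the node without the kubelet in the loop. A minimal sketch using containerd's Go client follows; it assumes the containerd 1.x module path (github.com/containerd/containerd), the default socket at /run/containerd/containerd.sock, and the k8s.io namespace that CRI-managed images live in.

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to the node's containerd over its default socket.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "connecting to containerd:", err)
    		os.Exit(1)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4" // one of the failing references above
    	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
    	if err != nil {
    		// Expect the same "failed to resolve reference ... not found" error seen in the log.
    		fmt.Fprintln(os.Stderr, "pull failed:", err)
    		os.Exit(1)
    	}
    	fmt.Println("pulled", img.Name())
    }
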
Nov 1 00:39:33.923669 kubelet[2931]: E1101 00:39:33.923588 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:39:34.702015 sshd[5081]: Accepted publickey for core from 139.178.89.65 port 59698 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:34.706673 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:34.716597 systemd-logind[1630]: New session 12 of user core. Nov 1 00:39:34.725876 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:39:34.922146 containerd[1648]: time="2025-11-01T00:39:34.922022988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:39:35.247984 containerd[1648]: time="2025-11-01T00:39:35.247889934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:39:35.249154 containerd[1648]: time="2025-11-01T00:39:35.249103286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:39:35.249235 containerd[1648]: time="2025-11-01T00:39:35.249213140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:39:35.249682 kubelet[2931]: E1101 00:39:35.249627 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:39:35.250121 kubelet[2931]: E1101 00:39:35.249700 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:39:35.250121 kubelet[2931]: E1101 00:39:35.249860 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxbrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:39:35.251554 kubelet[2931]: E1101 00:39:35.251462 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:39:35.494119 sshd[5084]: Connection closed by 139.178.89.65 port 59698 Nov 1 00:39:35.495397 sshd-session[5081]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:35.506302 systemd-logind[1630]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:39:35.510075 systemd[1]: sshd@9-10.230.36.206:22-139.178.89.65:59698.service: Deactivated successfully. Nov 1 00:39:35.517066 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:39:35.521114 systemd-logind[1630]: Removed session 12. Nov 1 00:39:35.656105 systemd[1]: Started sshd@10-10.230.36.206:22-139.178.89.65:59708.service - OpenSSH per-connection server daemon (139.178.89.65:59708). 
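The containerd and kubelet entries above all record the same failure mode: resolving any ghcr.io/flatcar/calico/*:v3.30.4 reference returns 404 Not Found, so every pull ends in ErrImagePull and then ImagePullBackOff. A quick way to confirm from outside the node whether a given tag is actually published is to query the registry's manifest endpoint directly. The sketch below uses only the Python standard library and assumes ghcr.io's standard OCI distribution token flow for anonymous pulls of public images; the repository path and tag come from the log lines, while the token endpoint and media types are assumptions, not something the log itself confirms.

#!/usr/bin/env python3
"""Check whether an image tag exists on ghcr.io (diagnostic sketch)."""
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/apiserver"   # repository from the failing reference above
TAG = "v3.30.4"                           # tag kubelet is trying to pull

def anon_token(repository: str) -> str:
    # Anonymous bearer token for public packages (assumed token endpoint).
    url = f"https://{REGISTRY}/token?service={REGISTRY}&scope=repository:{repository}:pull"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["token"]

def manifest_exists(repository: str, tag: str) -> bool:
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {anon_token(repository)}",
            # Accept both OCI and Docker manifest (list) media types.
            "Accept": ", ".join([
                "application/vnd.oci.image.index.v1+json",
                "application/vnd.oci.image.manifest.v1+json",
                "application/vnd.docker.distribution.manifest.list.v2+json",
                "application/vnd.docker.distribution.manifest.v2+json",
            ]),
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False   # same condition containerd reports as "not found"
        raise

if __name__ == "__main__":
    print(f"{REGISTRY}/{REPOSITORY}:{TAG} exists:", manifest_exists(REPOSITORY, TAG))

If the HEAD request returns 404, the tag simply is not published, which matches what containerd reports above; a 200 would instead point at a node-side or credential problem.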
Nov 1 00:39:36.581363 sshd[5097]: Accepted publickey for core from 139.178.89.65 port 59708 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:36.584927 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:36.598469 systemd-logind[1630]: New session 13 of user core. Nov 1 00:39:36.602742 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 1 00:39:37.395144 sshd[5100]: Connection closed by 139.178.89.65 port 59708 Nov 1 00:39:37.394141 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:37.400207 systemd[1]: sshd@10-10.230.36.206:22-139.178.89.65:59708.service: Deactivated successfully. Nov 1 00:39:37.400935 systemd-logind[1630]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:39:37.406113 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:39:37.412814 systemd-logind[1630]: Removed session 13. Nov 1 00:39:37.555888 systemd[1]: Started sshd@11-10.230.36.206:22-139.178.89.65:39718.service - OpenSSH per-connection server daemon (139.178.89.65:39718). Nov 1 00:39:37.920720 kubelet[2931]: E1101 00:39:37.920644 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:39:38.484166 sshd[5110]: Accepted publickey for core from 139.178.89.65 port 39718 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:38.486894 sshd-session[5110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:38.496438 systemd-logind[1630]: New session 14 of user core. Nov 1 00:39:38.503652 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 1 00:39:38.923797 kubelet[2931]: E1101 00:39:38.923594 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:39:38.927605 kubelet[2931]: E1101 00:39:38.923653 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:39:38.927605 kubelet[2931]: E1101 00:39:38.925606 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:39:39.252994 sshd[5113]: Connection closed by 139.178.89.65 port 39718 Nov 1 00:39:39.254151 sshd-session[5110]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:39.261550 systemd-logind[1630]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:39:39.263584 systemd[1]: sshd@11-10.230.36.206:22-139.178.89.65:39718.service: Deactivated successfully. Nov 1 00:39:39.266655 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:39:39.270218 systemd-logind[1630]: Removed session 14. Nov 1 00:39:40.506510 containerd[1648]: time="2025-11-01T00:39:40.506333496Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\" id:\"bfd2c882537c5482375a2bac33fa6e04c0cc0e24b9394c025512e9cbb7aaa64b\" pid:5138 exited_at:{seconds:1761957580 nanos:505771326}" Nov 1 00:39:44.410541 systemd[1]: Started sshd@12-10.230.36.206:22-139.178.89.65:39722.service - OpenSSH per-connection server daemon (139.178.89.65:39722). 
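The TaskExit event just above carries exited_at as a raw Unix timestamp (seconds:1761957580). Converting it shows it lines up with the journal's own Nov 1 00:39:40 stamp, i.e. the node clock is UTC; a minimal check:

from datetime import datetime, timezone

# exited_at from the TaskExit event above (seconds field only).
print(datetime.fromtimestamp(1761957580, tz=timezone.utc).isoformat())
# -> 2025-11-01T00:39:40+00:00, matching the journal timestamp of that event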
Nov 1 00:39:45.331275 sshd[5156]: Accepted publickey for core from 139.178.89.65 port 39722 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:45.333931 sshd-session[5156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:45.342465 systemd-logind[1630]: New session 15 of user core. Nov 1 00:39:45.352754 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:39:46.118910 sshd[5160]: Connection closed by 139.178.89.65 port 39722 Nov 1 00:39:46.126591 sshd-session[5156]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:46.133949 systemd-logind[1630]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:39:46.134215 systemd[1]: sshd@12-10.230.36.206:22-139.178.89.65:39722.service: Deactivated successfully. Nov 1 00:39:46.141643 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:39:46.147727 systemd-logind[1630]: Removed session 15. Nov 1 00:39:46.925797 kubelet[2931]: E1101 00:39:46.925556 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:39:48.919647 kubelet[2931]: E1101 00:39:48.919582 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:39:48.921722 kubelet[2931]: E1101 00:39:48.921689 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:39:49.925969 kubelet[2931]: E1101 00:39:49.925620 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:39:49.928847 kubelet[2931]: E1101 00:39:49.928749 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:39:50.921533 kubelet[2931]: E1101 00:39:50.920592 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:39:51.279397 systemd[1]: Started sshd@13-10.230.36.206:22-139.178.89.65:48512.service - OpenSSH per-connection server daemon (139.178.89.65:48512). Nov 1 00:39:52.234570 sshd[5172]: Accepted publickey for core from 139.178.89.65 port 48512 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:52.235957 sshd-session[5172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:52.247440 systemd-logind[1630]: New session 16 of user core. Nov 1 00:39:52.255053 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:39:53.065696 sshd[5177]: Connection closed by 139.178.89.65 port 48512 Nov 1 00:39:53.066812 sshd-session[5172]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:53.073762 systemd-logind[1630]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:39:53.075256 systemd[1]: sshd@13-10.230.36.206:22-139.178.89.65:48512.service: Deactivated successfully. Nov 1 00:39:53.078869 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:39:53.082667 systemd-logind[1630]: Removed session 16. Nov 1 00:39:58.219894 systemd[1]: Started sshd@14-10.230.36.206:22-139.178.89.65:51886.service - OpenSSH per-connection server daemon (139.178.89.65:51886). 
Nov 1 00:39:59.149763 sshd[5192]: Accepted publickey for core from 139.178.89.65 port 51886 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:39:59.151269 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:39:59.161729 systemd-logind[1630]: New session 17 of user core. Nov 1 00:39:59.168219 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:39:59.901705 sshd[5195]: Connection closed by 139.178.89.65 port 51886 Nov 1 00:39:59.902697 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Nov 1 00:39:59.909058 systemd[1]: sshd@14-10.230.36.206:22-139.178.89.65:51886.service: Deactivated successfully. Nov 1 00:39:59.912501 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:39:59.914683 systemd-logind[1630]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:39:59.916914 systemd-logind[1630]: Removed session 17. Nov 1 00:40:00.058437 systemd[1]: Started sshd@15-10.230.36.206:22-139.178.89.65:51894.service - OpenSSH per-connection server daemon (139.178.89.65:51894). Nov 1 00:40:00.922498 kubelet[2931]: E1101 00:40:00.922374 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:40:00.978231 sshd[5207]: Accepted publickey for core from 139.178.89.65 port 51894 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:40:00.980393 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:40:00.989713 systemd-logind[1630]: New session 18 of user core. Nov 1 00:40:00.998285 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 1 00:40:01.935969 kubelet[2931]: E1101 00:40:01.935915 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5" Nov 1 00:40:01.936688 kubelet[2931]: E1101 00:40:01.936353 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:40:01.938573 kubelet[2931]: E1101 00:40:01.936459 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:40:02.040694 sshd[5210]: Connection closed by 139.178.89.65 port 51894 Nov 1 00:40:02.046620 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:02.056921 systemd[1]: sshd@15-10.230.36.206:22-139.178.89.65:51894.service: Deactivated successfully. Nov 1 00:40:02.061851 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:40:02.070904 systemd-logind[1630]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:40:02.074291 systemd-logind[1630]: Removed session 18. Nov 1 00:40:02.199873 systemd[1]: Started sshd@16-10.230.36.206:22-139.178.89.65:51908.service - OpenSSH per-connection server daemon (139.178.89.65:51908). 
Nov 1 00:40:02.922148 kubelet[2931]: E1101 00:40:02.920863 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:40:02.922148 kubelet[2931]: E1101 00:40:02.922052 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:40:03.162885 sshd[5220]: Accepted publickey for core from 139.178.89.65 port 51908 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:40:03.166163 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:40:03.177379 systemd-logind[1630]: New session 19 of user core. Nov 1 00:40:03.187162 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:40:04.824882 sshd[5223]: Connection closed by 139.178.89.65 port 51908 Nov 1 00:40:04.826254 sshd-session[5220]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:04.836239 systemd[1]: sshd@16-10.230.36.206:22-139.178.89.65:51908.service: Deactivated successfully. Nov 1 00:40:04.840713 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:40:04.842777 systemd-logind[1630]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:40:04.846873 systemd-logind[1630]: Removed session 19. Nov 1 00:40:04.985597 systemd[1]: Started sshd@17-10.230.36.206:22-139.178.89.65:51920.service - OpenSSH per-connection server daemon (139.178.89.65:51920). Nov 1 00:40:05.927828 sshd[5246]: Accepted publickey for core from 139.178.89.65 port 51920 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:40:05.929616 sshd-session[5246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:40:05.941359 systemd-logind[1630]: New session 20 of user core. Nov 1 00:40:05.948795 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:40:07.038737 sshd[5249]: Connection closed by 139.178.89.65 port 51920 Nov 1 00:40:07.038580 sshd-session[5246]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:07.050218 systemd[1]: sshd@17-10.230.36.206:22-139.178.89.65:51920.service: Deactivated successfully. Nov 1 00:40:07.057823 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:40:07.063371 systemd-logind[1630]: Session 20 logged out. Waiting for processes to exit. Nov 1 00:40:07.066688 systemd-logind[1630]: Removed session 20. Nov 1 00:40:07.198054 systemd[1]: Started sshd@18-10.230.36.206:22-139.178.89.65:40100.service - OpenSSH per-connection server daemon (139.178.89.65:40100). 
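By this point the journal is cycling through the same back-off messages for the same set of Calico images and pods. A small filter that deduplicates them makes an excerpt like this easier to read; the sketch below relies only on fields visible in these lines (the pod="..." field and image references of the form ghcr.io/...:tag), with the patterns taken from this excerpt rather than from any kubelet documentation.

import re
import sys
from collections import defaultdict

# Tailored to journal lines like the ones above: kubelet pod_workers.go
# "Error syncing pod" entries that embed the failing image references.
IMAGE_RE = re.compile(r"(ghcr\.io/[\w./-]+:[\w.-]+)")
POD_RE = re.compile(r'pod="([^"]+)"')

def summarise(lines):
    failures = defaultdict(set)
    for line in lines:
        if "pod_workers.go" not in line or "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        for image in set(IMAGE_RE.findall(line)):
            failures[image].add(pod.group(1) if pod else "<unknown pod>")
    return failures

if __name__ == "__main__":
    for image, pods in sorted(summarise(sys.stdin).items()):
        print(f"{image}: {', '.join(sorted(pods))}")

Run against a saved journal slice piped on stdin, it reduces the repeated back-off entries to one line per failing image with the pods that reference it.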
Nov 1 00:40:08.133508 sshd[5259]: Accepted publickey for core from 139.178.89.65 port 40100 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:40:08.135075 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:40:08.143707 systemd-logind[1630]: New session 21 of user core. Nov 1 00:40:08.154625 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:40:08.943682 sshd[5262]: Connection closed by 139.178.89.65 port 40100 Nov 1 00:40:08.944218 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:08.949831 systemd-logind[1630]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:40:08.950811 systemd[1]: sshd@18-10.230.36.206:22-139.178.89.65:40100.service: Deactivated successfully. Nov 1 00:40:08.954961 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:40:08.959838 systemd-logind[1630]: Removed session 21. Nov 1 00:40:10.854062 containerd[1648]: time="2025-11-01T00:40:10.853948779Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e5c3d340f2f3606bfe9bf437b9226770910a2fd9b4233cf757322e2263790f27\" id:\"0495c5b656e93675143d0ca72c1ae737f92f88468b675cba015e6fad880ea3af\" pid:5286 exited_at:{seconds:1761957610 nanos:853053250}" Nov 1 00:40:12.922024 containerd[1648]: time="2025-11-01T00:40:12.921875523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:40:13.247820 containerd[1648]: time="2025-11-01T00:40:13.246295173Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:40:13.249289 containerd[1648]: time="2025-11-01T00:40:13.249078320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:40:13.249289 containerd[1648]: time="2025-11-01T00:40:13.249113660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:40:13.249761 kubelet[2931]: E1101 00:40:13.249678 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:40:13.250400 kubelet[2931]: E1101 00:40:13.249783 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:40:13.250400 kubelet[2931]: E1101 00:40:13.250084 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7g8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76954f9f66-p69hm_calico-system(7b986655-34d6-4a0c-a36f-8538fa8da4e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:40:13.252158 kubelet[2931]: E1101 00:40:13.251636 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5" Nov 1 00:40:13.924861 kubelet[2931]: E1101 00:40:13.921541 2931 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f" Nov 1 00:40:13.925666 containerd[1648]: time="2025-11-01T00:40:13.925523907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:40:14.110458 systemd[1]: Started sshd@19-10.230.36.206:22-139.178.89.65:40114.service - OpenSSH per-connection server daemon (139.178.89.65:40114). Nov 1 00:40:14.231474 containerd[1648]: time="2025-11-01T00:40:14.231282233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:40:14.232919 containerd[1648]: time="2025-11-01T00:40:14.232819957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:40:14.233039 containerd[1648]: time="2025-11-01T00:40:14.232967023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:40:14.234032 kubelet[2931]: E1101 00:40:14.233374 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:40:14.234032 kubelet[2931]: E1101 00:40:14.233442 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:40:14.234032 kubelet[2931]: E1101 00:40:14.233628 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8fbbc8cc50c84aebad3a12a3980d47ed,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:40:14.237243 containerd[1648]: time="2025-11-01T00:40:14.237207061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:40:14.546779 containerd[1648]: time="2025-11-01T00:40:14.546344790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:40:14.548563 containerd[1648]: time="2025-11-01T00:40:14.548498684Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:40:14.549139 containerd[1648]: time="2025-11-01T00:40:14.548642735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:40:14.549289 kubelet[2931]: E1101 00:40:14.548911 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:40:14.549289 kubelet[2931]: E1101 00:40:14.549000 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:40:14.549289 kubelet[2931]: E1101 00:40:14.549199 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rfvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6645fc67d5-m78ls_calico-system(bfbb9d24-fe82-4293-8037-a6b00d156a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:40:14.553608 kubelet[2931]: E1101 00:40:14.550823 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26" Nov 1 00:40:14.922645 containerd[1648]: time="2025-11-01T00:40:14.922539634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:40:15.245907 containerd[1648]: time="2025-11-01T00:40:15.245511633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:40:15.247394 
containerd[1648]: time="2025-11-01T00:40:15.247022492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:40:15.247394 containerd[1648]: time="2025-11-01T00:40:15.247058091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:40:15.249544 kubelet[2931]: E1101 00:40:15.247817 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:40:15.249544 kubelet[2931]: E1101 00:40:15.247920 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:40:15.249544 kubelet[2931]: E1101 00:40:15.248130 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:40:15.252553 containerd[1648]: time="2025-11-01T00:40:15.251918705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:40:15.568748 containerd[1648]: time="2025-11-01T00:40:15.568057237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:40:15.573102 containerd[1648]: time="2025-11-01T00:40:15.572835050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:40:15.573102 containerd[1648]: time="2025-11-01T00:40:15.572847893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:40:15.573599 kubelet[2931]: E1101 00:40:15.573474 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:40:15.574694 kubelet[2931]: E1101 00:40:15.573589 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:40:15.574694 kubelet[2931]: E1101 00:40:15.573880 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4brpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rbrtf_calico-system(12234797-91a4-4e56-83d9-8fb50717e71b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:40:15.575387 kubelet[2931]: E1101 00:40:15.575099 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b" Nov 1 00:40:15.759794 sshd[5315]: Accepted publickey for core from 139.178.89.65 port 40114 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8 Nov 1 00:40:15.764120 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:40:15.777728 systemd-logind[1630]: New session 22 of user core. Nov 1 00:40:15.789294 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 1 00:40:15.925888 containerd[1648]: time="2025-11-01T00:40:15.925786444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:40:16.228425 containerd[1648]: time="2025-11-01T00:40:16.228046039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:40:16.229580 containerd[1648]: time="2025-11-01T00:40:16.229390655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:40:16.231698 containerd[1648]: time="2025-11-01T00:40:16.229470335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:40:16.232032 kubelet[2931]: E1101 00:40:16.231962 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:40:16.232336 kubelet[2931]: E1101 00:40:16.232202 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:40:16.233269 kubelet[2931]: E1101 00:40:16.233179 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6vvx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dh7wf_calico-system(882832a0-46d3-43b4-82bb-ea5df649d892): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:40:16.234722 kubelet[2931]: E1101 00:40:16.234652 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892" Nov 1 00:40:16.629694 sshd[5322]: Connection closed by 139.178.89.65 port 40114 Nov 1 00:40:16.630922 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Nov 1 00:40:16.640627 systemd-logind[1630]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:40:16.642529 systemd[1]: sshd@19-10.230.36.206:22-139.178.89.65:40114.service: Deactivated successfully. Nov 1 00:40:16.647188 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:40:16.652266 systemd-logind[1630]: Removed session 22. 
Nov 1 00:40:16.923321 containerd[1648]: time="2025-11-01T00:40:16.923084468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:40:17.236329 containerd[1648]: time="2025-11-01T00:40:17.235772967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 00:40:17.238312 containerd[1648]: time="2025-11-01T00:40:17.238144449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:40:17.238312 containerd[1648]: time="2025-11-01T00:40:17.238191794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:40:17.238891 kubelet[2931]: E1101 00:40:17.238803 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:40:17.240016 kubelet[2931]: E1101 00:40:17.238917 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:40:17.240016 kubelet[2931]: E1101 00:40:17.239141 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kst54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-5ft7v_calico-apiserver(9347775b-36ca-4333-aa6c-bfa61a2002e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:40:17.240608 kubelet[2931]: E1101 00:40:17.240561 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5"
Nov 1 00:40:21.791892 systemd[1]: Started sshd@20-10.230.36.206:22-139.178.89.65:34402.service - OpenSSH per-connection server daemon (139.178.89.65:34402).
Nov 1 00:40:22.742556 sshd[5341]: Accepted publickey for core from 139.178.89.65 port 34402 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8
Nov 1 00:40:22.744202 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:40:22.752868 systemd-logind[1630]: New session 23 of user core.
Nov 1 00:40:22.762758 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 00:40:23.506040 sshd[5345]: Connection closed by 139.178.89.65 port 34402
Nov 1 00:40:23.507441 sshd-session[5341]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:23.519096 systemd[1]: sshd@20-10.230.36.206:22-139.178.89.65:34402.service: Deactivated successfully.
Nov 1 00:40:23.525021 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:40:23.528282 systemd-logind[1630]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:40:23.531760 systemd-logind[1630]: Removed session 23.
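The &Container{...} dump logged at 00:40:17.239141 above is hard to read inline. Re-expressed below as pod-spec-style fields, purely as a readability aid: the values are copied from that dump (trimmed to the args, probe and security settings), and the dict layout is only assumed to mirror the usual manifest schema.

# Readability aid for the calico-apiserver container dumped in the kubelet log above.
calico_apiserver_container = {
    "name": "calico-apiserver",
    "image": "ghcr.io/flatcar/calico/apiserver:v3.30.4",
    "args": [
        "--secure-port=5443",
        "--tls-private-key-file=/calico-apiserver-certs/tls.key",
        "--tls-cert-file=/calico-apiserver-certs/tls.crt",
    ],
    "imagePullPolicy": "IfNotPresent",
    "readinessProbe": {
        "httpGet": {"path": "/readyz", "port": 5443, "scheme": "HTTPS"},
        "initialDelaySeconds": 0,
        "timeoutSeconds": 5,
        "periodSeconds": 60,
        "successThreshold": 1,
        "failureThreshold": 3,
    },
    "securityContext": {
        "capabilities": {"drop": ["ALL"]},
        "privileged": False,
        "runAsUser": 10001,
        "runAsGroup": 10001,
        "runAsNonRoot": True,
        "allowPrivilegeEscalation": False,
        "seccompProfile": {"type": "RuntimeDefault"},
    },
}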
Nov 1 00:40:24.921242 kubelet[2931]: E1101 00:40:24.920702 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-76954f9f66-p69hm" podUID="7b986655-34d6-4a0c-a36f-8538fa8da4e5"
Nov 1 00:40:26.921829 kubelet[2931]: E1101 00:40:26.921754 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6645fc67d5-m78ls" podUID="bfbb9d24-fe82-4293-8037-a6b00d156a26"
Nov 1 00:40:28.668799 systemd[1]: Started sshd@21-10.230.36.206:22-139.178.89.65:44368.service - OpenSSH per-connection server daemon (139.178.89.65:44368).
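Once a pull has failed, the kubelet keeps the pod in ImagePullBackOff and retries on an exponential schedule rather than hammering the registry, which is why the same "Back-off pulling image" messages recur in the entries above and below. A minimal sketch of that schedule, assuming the commonly cited defaults of a 10-second initial delay doubling to a 5-minute cap; treat the constants as assumptions, not a statement about this node's configuration.

# Illustrative image-pull back-off schedule (assumed defaults: 10s initial, x2, 300s cap).
def pull_backoff_schedule(attempts: int, initial: float = 10.0, cap: float = 300.0):
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * 2, cap)

print(list(pull_backoff_schedule(6)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]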
Nov 1 00:40:28.922984 kubelet[2931]: E1101 00:40:28.922032 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-5ft7v" podUID="9347775b-36ca-4333-aa6c-bfa61a2002e5"
Nov 1 00:40:28.924316 containerd[1648]: time="2025-11-01T00:40:28.922702320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 00:40:29.252066 containerd[1648]: time="2025-11-01T00:40:29.250749250Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 00:40:29.253588 containerd[1648]: time="2025-11-01T00:40:29.252931188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 00:40:29.253588 containerd[1648]: time="2025-11-01T00:40:29.252965033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 00:40:29.254734 kubelet[2931]: E1101 00:40:29.253901 2931 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:40:29.255811 kubelet[2931]: E1101 00:40:29.254852 2931 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 00:40:29.255811 kubelet[2931]: E1101 00:40:29.255146 2931 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pxbrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-798685d547-c7jpd_calico-apiserver(aae018ce-9d35-415b-9be9-2f54c95ef40f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:40:29.256438 kubelet[2931]: E1101 00:40:29.256355 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-798685d547-c7jpd" podUID="aae018ce-9d35-415b-9be9-2f54c95ef40f"
Nov 1 00:40:29.597518 sshd[5359]: Accepted publickey for core from 139.178.89.65 port 44368 ssh2: RSA SHA256:ZB93jfLpmLBtDyC4g8RKO4UtnHg9xxV0Wydb4nCt8Z8
Nov 1 00:40:29.599720 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:40:29.612076 systemd-logind[1630]: New session 24 of user core.
Nov 1 00:40:29.614867 systemd[1]: Started session-24.scope - Session 24 of User core.
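"Failed to resolve reference" in the messages above refers to the registry lookup of host/repository:tag. A deliberately simplified split of such a reference, only to make those strings easier to read; containerd's real parser also handles digests, ports and default registries.

# Naive reference split, for reading the log messages above (not containerd's parser).
def split_reference(ref: str):
    rest, _, tag = ref.rpartition(":")
    host, _, repository = rest.partition("/")
    return host, repository, tag

print(split_reference("ghcr.io/flatcar/calico/apiserver:v3.30.4"))
# ('ghcr.io', 'flatcar/calico/apiserver', 'v3.30.4')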
Nov 1 00:40:29.924451 kubelet[2931]: E1101 00:40:29.924178 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dh7wf" podUID="882832a0-46d3-43b4-82bb-ea5df649d892"
Nov 1 00:40:29.927435 kubelet[2931]: E1101 00:40:29.927321 2931 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rbrtf" podUID="12234797-91a4-4e56-83d9-8fb50717e71b"
Nov 1 00:40:30.342358 sshd[5362]: Connection closed by 139.178.89.65 port 44368
Nov 1 00:40:30.343357 sshd-session[5359]: pam_unix(sshd:session): session closed for user core
Nov 1 00:40:30.350104 systemd-logind[1630]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:40:30.351093 systemd[1]: sshd@21-10.230.36.206:22-139.178.89.65:44368.service: Deactivated successfully.
Nov 1 00:40:30.358018 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:40:30.362846 systemd-logind[1630]: Removed session 24.
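Every pull failure in this window follows the same pattern: ghcr.io/flatcar/calico/*:v3.30.4 tags that the registry reports as not found. A small triage sketch that tallies the failing references from a saved copy of this journal; the node.journal path is hypothetical.

import re
from collections import Counter

# Image references as they appear in the escaped kubelet/containerd messages.
IMAGE_REF = re.compile(r"ghcr\.io/[\w./-]+:[\w.-]+")

def failing_images(journal_path: str) -> Counter:
    """Count image references on journal lines that report pull failures."""
    counts = Counter()
    with open(journal_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            if "ErrImagePull" in line or "ImagePullBackOff" in line:
                counts.update(set(IMAGE_REF.findall(line)))
    return counts

print(failing_images("node.journal").most_common())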