Nov 6 05:53:15.275452 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Thu Nov 6 03:32:51 -00 2025 Nov 6 05:53:15.275488 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69 Nov 6 05:53:15.275503 kernel: BIOS-provided physical RAM map: Nov 6 05:53:15.275514 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Nov 6 05:53:15.275539 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Nov 6 05:53:15.275556 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Nov 6 05:53:15.275568 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Nov 6 05:53:15.275586 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Nov 6 05:53:15.275598 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Nov 6 05:53:15.275608 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Nov 6 05:53:15.275619 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Nov 6 05:53:15.275630 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Nov 6 05:53:15.275641 kernel: NX (Execute Disable) protection: active Nov 6 05:53:15.275652 kernel: APIC: Static calls initialized Nov 6 05:53:15.275669 kernel: SMBIOS 2.8 present. Nov 6 05:53:15.275681 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Nov 6 05:53:15.275693 kernel: DMI: Memory slots populated: 1/1 Nov 6 05:53:15.275704 kernel: Hypervisor detected: KVM Nov 6 05:53:15.275716 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 6 05:53:15.275731 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Nov 6 05:53:15.275743 kernel: kvm-clock: using sched offset of 5036541645 cycles Nov 6 05:53:15.275756 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Nov 6 05:53:15.275768 kernel: tsc: Detected 2500.032 MHz processor Nov 6 05:53:15.275780 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 6 05:53:15.275792 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 6 05:53:15.275804 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Nov 6 05:53:15.275816 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Nov 6 05:53:15.275827 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 6 05:53:15.275850 kernel: Using GB pages for direct mapping Nov 6 05:53:15.275863 kernel: ACPI: Early table checksum verification disabled Nov 6 05:53:15.275875 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Nov 6 05:53:15.275886 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:53:15.275898 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:53:15.275910 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:53:15.275922 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Nov 6 05:53:15.275934 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Nov 6 05:53:15.275945 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:53:15.275965 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:53:15.275977 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 6 05:53:15.275989 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Nov 6 05:53:15.276006 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Nov 6 05:53:15.276019 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Nov 6 05:53:15.276031 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Nov 6 05:53:15.276043 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Nov 6 05:53:15.276060 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Nov 6 05:53:15.276072 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Nov 6 05:53:15.276084 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Nov 6 05:53:15.276096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Nov 6 05:53:15.276109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Nov 6 05:53:15.276121 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Nov 6 05:53:15.276150 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Nov 6 05:53:15.276710 kernel: Zone ranges: Nov 6 05:53:15.276749 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 6 05:53:15.276763 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Nov 6 05:53:15.276775 kernel: Normal empty Nov 6 05:53:15.276787 kernel: Device empty Nov 6 05:53:15.276815 kernel: Movable zone start for each node Nov 6 05:53:15.276828 kernel: Early memory node ranges Nov 6 05:53:15.276840 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Nov 6 05:53:15.276852 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Nov 6 05:53:15.276864 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Nov 6 05:53:15.276894 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 6 05:53:15.276906 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 6 05:53:15.276918 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Nov 6 05:53:15.276931 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 6 05:53:15.276948 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 6 05:53:15.276961 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 6 05:53:15.276973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 6 05:53:15.276985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 6 05:53:15.276998 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 6 05:53:15.277020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 6 05:53:15.277033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 6 05:53:15.277045 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 6 05:53:15.277057 kernel: TSC deadline timer available Nov 6 05:53:15.277069 kernel: CPU topo: Max. logical packages: 16 Nov 6 05:53:15.277082 kernel: CPU topo: Max. logical dies: 16 Nov 6 05:53:15.277094 kernel: CPU topo: Max. dies per package: 1 Nov 6 05:53:15.277106 kernel: CPU topo: Max. threads per core: 1 Nov 6 05:53:15.277126 kernel: CPU topo: Num. 
cores per package: 1 Nov 6 05:53:15.278181 kernel: CPU topo: Num. threads per package: 1 Nov 6 05:53:15.278198 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Nov 6 05:53:15.278213 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 6 05:53:15.278225 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 6 05:53:15.278238 kernel: Booting paravirtualized kernel on KVM Nov 6 05:53:15.278250 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 6 05:53:15.278263 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Nov 6 05:53:15.278275 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Nov 6 05:53:15.278287 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Nov 6 05:53:15.278305 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Nov 6 05:53:15.278317 kernel: kvm-guest: PV spinlocks enabled Nov 6 05:53:15.278330 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 6 05:53:15.278343 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69 Nov 6 05:53:15.278356 kernel: random: crng init done Nov 6 05:53:15.278368 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 6 05:53:15.278381 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 6 05:53:15.278393 kernel: Fallback order for Node 0: 0 Nov 6 05:53:15.278405 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Nov 6 05:53:15.278421 kernel: Policy zone: DMA32 Nov 6 05:53:15.278433 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 6 05:53:15.278445 kernel: software IO TLB: area num 16. Nov 6 05:53:15.278458 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Nov 6 05:53:15.278470 kernel: Kernel/User page tables isolation: enabled Nov 6 05:53:15.278482 kernel: ftrace: allocating 40092 entries in 157 pages Nov 6 05:53:15.278494 kernel: ftrace: allocated 157 pages with 5 groups Nov 6 05:53:15.278507 kernel: Dynamic Preempt: voluntary Nov 6 05:53:15.278519 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 6 05:53:15.278549 kernel: rcu: RCU event tracing is enabled. Nov 6 05:53:15.278562 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Nov 6 05:53:15.278574 kernel: Trampoline variant of Tasks RCU enabled. Nov 6 05:53:15.278587 kernel: Rude variant of Tasks RCU enabled. Nov 6 05:53:15.278599 kernel: Tracing variant of Tasks RCU enabled. Nov 6 05:53:15.278611 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 6 05:53:15.278623 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Nov 6 05:53:15.278636 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 6 05:53:15.278648 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Nov 6 05:53:15.278665 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Nov 6 05:53:15.278677 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Nov 6 05:53:15.278690 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 6 05:53:15.278712 kernel: Console: colour VGA+ 80x25 Nov 6 05:53:15.278729 kernel: printk: legacy console [tty0] enabled Nov 6 05:53:15.278742 kernel: printk: legacy console [ttyS0] enabled Nov 6 05:53:15.278761 kernel: ACPI: Core revision 20240827 Nov 6 05:53:15.278774 kernel: APIC: Switch to symmetric I/O mode setup Nov 6 05:53:15.278787 kernel: x2apic enabled Nov 6 05:53:15.278799 kernel: APIC: Switched APIC routing to: physical x2apic Nov 6 05:53:15.278813 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Nov 6 05:53:15.278831 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032) Nov 6 05:53:15.278844 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 6 05:53:15.278857 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Nov 6 05:53:15.278870 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Nov 6 05:53:15.278882 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 6 05:53:15.278895 kernel: Spectre V2 : Mitigation: Retpolines Nov 6 05:53:15.278911 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 6 05:53:15.278924 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 6 05:53:15.278937 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 6 05:53:15.278950 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 6 05:53:15.278962 kernel: MDS: Mitigation: Clear CPU buffers Nov 6 05:53:15.278975 kernel: MMIO Stale Data: Unknown: No mitigations Nov 6 05:53:15.278987 kernel: SRBDS: Unknown: Dependent on hypervisor status Nov 6 05:53:15.279000 kernel: active return thunk: its_return_thunk Nov 6 05:53:15.279012 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 6 05:53:15.279026 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 6 05:53:15.279043 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 6 05:53:15.279056 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 6 05:53:15.279069 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 6 05:53:15.279081 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Nov 6 05:53:15.279094 kernel: Freeing SMP alternatives memory: 32K Nov 6 05:53:15.279106 kernel: pid_max: default: 32768 minimum: 301 Nov 6 05:53:15.279119 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 6 05:53:15.279145 kernel: landlock: Up and running. Nov 6 05:53:15.279160 kernel: SELinux: Initializing. Nov 6 05:53:15.279173 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 05:53:15.279186 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Nov 6 05:53:15.279199 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Nov 6 05:53:15.279225 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Nov 6 05:53:15.279238 kernel: signal: max sigframe size: 1776 Nov 6 05:53:15.279251 kernel: rcu: Hierarchical SRCU implementation. Nov 6 05:53:15.279264 kernel: rcu: Max phase no-delay instances is 400. 
Nov 6 05:53:15.279277 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Nov 6 05:53:15.279290 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 6 05:53:15.279303 kernel: smp: Bringing up secondary CPUs ... Nov 6 05:53:15.279315 kernel: smpboot: x86: Booting SMP configuration: Nov 6 05:53:15.279328 kernel: .... node #0, CPUs: #1 Nov 6 05:53:15.279353 kernel: smp: Brought up 1 node, 2 CPUs Nov 6 05:53:15.279366 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS) Nov 6 05:53:15.279379 kernel: Memory: 1914112K/2096616K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15356K init, 2688K bss, 176488K reserved, 0K cma-reserved) Nov 6 05:53:15.279392 kernel: devtmpfs: initialized Nov 6 05:53:15.279405 kernel: x86/mm: Memory block size: 128MB Nov 6 05:53:15.279418 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 6 05:53:15.279431 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Nov 6 05:53:15.279444 kernel: pinctrl core: initialized pinctrl subsystem Nov 6 05:53:15.279457 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 6 05:53:15.279480 kernel: audit: initializing netlink subsys (disabled) Nov 6 05:53:15.279494 kernel: audit: type=2000 audit(1762408391.249:1): state=initialized audit_enabled=0 res=1 Nov 6 05:53:15.279506 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 6 05:53:15.279519 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 6 05:53:15.279551 kernel: cpuidle: using governor menu Nov 6 05:53:15.279564 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 6 05:53:15.279577 kernel: dca service started, version 1.12.1 Nov 6 05:53:15.279595 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Nov 6 05:53:15.279609 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 6 05:53:15.279627 kernel: PCI: Using configuration type 1 for base access Nov 6 05:53:15.279640 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 6 05:53:15.279653 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 6 05:53:15.279666 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 6 05:53:15.279679 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 6 05:53:15.279692 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 6 05:53:15.279704 kernel: ACPI: Added _OSI(Module Device) Nov 6 05:53:15.279717 kernel: ACPI: Added _OSI(Processor Device) Nov 6 05:53:15.279730 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 6 05:53:15.279747 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 6 05:53:15.279760 kernel: ACPI: Interpreter enabled Nov 6 05:53:15.279773 kernel: ACPI: PM: (supports S0 S5) Nov 6 05:53:15.279785 kernel: ACPI: Using IOAPIC for interrupt routing Nov 6 05:53:15.279798 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 6 05:53:15.279811 kernel: PCI: Using E820 reservations for host bridge windows Nov 6 05:53:15.279824 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 6 05:53:15.279837 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 6 05:53:15.280122 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 6 05:53:15.283239 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 6 05:53:15.283418 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 6 05:53:15.283439 kernel: PCI host bridge to bus 0000:00 Nov 6 05:53:15.283637 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 6 05:53:15.283800 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 6 05:53:15.283953 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 6 05:53:15.284145 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 6 05:53:15.284315 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 6 05:53:15.284486 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Nov 6 05:53:15.284663 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 6 05:53:15.284869 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Nov 6 05:53:15.285072 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Nov 6 05:53:15.285270 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Nov 6 05:53:15.285420 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Nov 6 05:53:15.285612 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Nov 6 05:53:15.285775 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 6 05:53:15.285965 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.286226 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Nov 6 05:53:15.286401 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 6 05:53:15.286600 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 6 05:53:15.286773 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 05:53:15.286966 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.287168 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Nov 6 05:53:15.287340 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 6 05:53:15.287508 kernel: pci 0000:00:02.1: 
bridge window [mem 0xfe800000-0xfe9fffff] Nov 6 05:53:15.287704 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 05:53:15.287912 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.288100 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Nov 6 05:53:15.289314 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 6 05:53:15.289500 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 6 05:53:15.289697 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 05:53:15.289887 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.290065 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Nov 6 05:53:15.291222 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 6 05:53:15.291411 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 6 05:53:15.292438 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 05:53:15.292645 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.292826 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Nov 6 05:53:15.292990 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 6 05:53:15.293200 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 6 05:53:15.293367 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 05:53:15.293620 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.294595 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Nov 6 05:53:15.294767 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 6 05:53:15.294934 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 6 05:53:15.295097 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 05:53:15.295295 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.295461 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Nov 6 05:53:15.295675 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 6 05:53:15.295842 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 6 05:53:15.296020 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 05:53:15.296619 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Nov 6 05:53:15.296791 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Nov 6 05:53:15.296956 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 6 05:53:15.298303 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 6 05:53:15.299411 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 05:53:15.299651 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Nov 6 05:53:15.299826 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Nov 6 05:53:15.299995 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Nov 6 05:53:15.301348 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Nov 6 05:53:15.301553 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Nov 6 05:53:15.301763 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Nov 6 05:53:15.301940 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Nov 6 05:53:15.302111 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff] Nov 6 05:53:15.302302 kernel: pci 0000:00:04.0: BAR 4 
[mem 0xfd004000-0xfd007fff 64bit pref] Nov 6 05:53:15.302486 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Nov 6 05:53:15.302690 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 6 05:53:15.302880 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Nov 6 05:53:15.303074 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Nov 6 05:53:15.305623 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Nov 6 05:53:15.305815 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Nov 6 05:53:15.305987 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Nov 6 05:53:15.306201 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Nov 6 05:53:15.306379 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Nov 6 05:53:15.306586 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 6 05:53:15.306763 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 6 05:53:15.306933 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 6 05:53:15.307120 kernel: pci_bus 0000:02: extended config space not accessible Nov 6 05:53:15.308368 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Nov 6 05:53:15.308570 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Nov 6 05:53:15.308745 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 6 05:53:15.308950 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Nov 6 05:53:15.309125 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Nov 6 05:53:15.310346 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 6 05:53:15.310554 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Nov 6 05:53:15.310733 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Nov 6 05:53:15.310902 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 6 05:53:15.311075 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 6 05:53:15.311288 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 6 05:53:15.311457 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 6 05:53:15.311640 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 6 05:53:15.311807 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 6 05:53:15.311827 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 6 05:53:15.311841 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 6 05:53:15.311854 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 6 05:53:15.311884 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 6 05:53:15.311898 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 6 05:53:15.311916 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 6 05:53:15.311930 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 6 05:53:15.311943 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 6 05:53:15.311956 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 6 05:53:15.311969 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 6 05:53:15.311982 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 6 05:53:15.311995 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 6 05:53:15.312020 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 6 05:53:15.312033 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 6 05:53:15.312046 kernel: ACPI: PCI: 
Interrupt link GSIG configured for IRQ 22 Nov 6 05:53:15.312059 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 6 05:53:15.312072 kernel: iommu: Default domain type: Translated Nov 6 05:53:15.312085 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 6 05:53:15.312098 kernel: PCI: Using ACPI for IRQ routing Nov 6 05:53:15.312111 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 6 05:53:15.312124 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Nov 6 05:53:15.314175 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Nov 6 05:53:15.314377 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 6 05:53:15.314565 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 6 05:53:15.314735 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 6 05:53:15.314755 kernel: vgaarb: loaded Nov 6 05:53:15.314770 kernel: clocksource: Switched to clocksource kvm-clock Nov 6 05:53:15.314783 kernel: VFS: Disk quotas dquot_6.6.0 Nov 6 05:53:15.314796 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 6 05:53:15.314826 kernel: pnp: PnP ACPI init Nov 6 05:53:15.315027 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 6 05:53:15.315054 kernel: pnp: PnP ACPI: found 5 devices Nov 6 05:53:15.315067 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 6 05:53:15.315080 kernel: NET: Registered PF_INET protocol family Nov 6 05:53:15.315093 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 6 05:53:15.315106 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Nov 6 05:53:15.315119 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 6 05:53:15.315165 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 6 05:53:15.315180 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Nov 6 05:53:15.315200 kernel: TCP: Hash tables configured (established 16384 bind 16384) Nov 6 05:53:15.315214 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 05:53:15.315227 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Nov 6 05:53:15.315241 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 6 05:53:15.315254 kernel: NET: Registered PF_XDP protocol family Nov 6 05:53:15.315438 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Nov 6 05:53:15.315627 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Nov 6 05:53:15.315812 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Nov 6 05:53:15.315979 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Nov 6 05:53:15.316164 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Nov 6 05:53:15.316332 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Nov 6 05:53:15.316498 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Nov 6 05:53:15.316676 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Nov 6 05:53:15.316841 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Nov 6 05:53:15.317006 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Nov 6 05:53:15.318153 kernel: pci 0000:00:02.2: bridge 
window [io 0x3000-0x3fff]: assigned Nov 6 05:53:15.318337 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Nov 6 05:53:15.318508 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Nov 6 05:53:15.318698 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Nov 6 05:53:15.318865 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Nov 6 05:53:15.319686 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Nov 6 05:53:15.319869 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Nov 6 05:53:15.320106 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Nov 6 05:53:15.320313 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Nov 6 05:53:15.320492 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Nov 6 05:53:15.320673 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Nov 6 05:53:15.320839 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 05:53:15.321005 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Nov 6 05:53:15.321256 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Nov 6 05:53:15.321428 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Nov 6 05:53:15.321690 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 05:53:15.321860 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Nov 6 05:53:15.322875 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Nov 6 05:53:15.323057 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Nov 6 05:53:15.323287 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 05:53:15.323458 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Nov 6 05:53:15.323644 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Nov 6 05:53:15.323812 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Nov 6 05:53:15.323997 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 05:53:15.324183 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Nov 6 05:53:15.324351 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Nov 6 05:53:15.324517 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Nov 6 05:53:15.324698 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 05:53:15.324883 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Nov 6 05:53:15.325047 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Nov 6 05:53:15.325248 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Nov 6 05:53:15.325415 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 05:53:15.325619 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Nov 6 05:53:15.325784 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Nov 6 05:53:15.325953 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Nov 6 05:53:15.326118 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 05:53:15.327355 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Nov 6 05:53:15.327553 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Nov 6 05:53:15.327741 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Nov 6 05:53:15.327907 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 05:53:15.328066 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 6 05:53:15.328237 kernel: pci_bus 0000:00: resource 5 [io 
0x0d00-0xffff window] Nov 6 05:53:15.328388 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 6 05:53:15.328554 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 6 05:53:15.329344 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 6 05:53:15.329498 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Nov 6 05:53:15.329709 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Nov 6 05:53:15.329869 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Nov 6 05:53:15.330026 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Nov 6 05:53:15.331297 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Nov 6 05:53:15.331479 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Nov 6 05:53:15.331665 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Nov 6 05:53:15.331845 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Nov 6 05:53:15.332014 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Nov 6 05:53:15.332192 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Nov 6 05:53:15.332354 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Nov 6 05:53:15.332545 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Nov 6 05:53:15.332704 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Nov 6 05:53:15.332860 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Nov 6 05:53:15.333045 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Nov 6 05:53:15.335280 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Nov 6 05:53:15.335470 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Nov 6 05:53:15.335669 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Nov 6 05:53:15.335829 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Nov 6 05:53:15.335986 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Nov 6 05:53:15.336172 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Nov 6 05:53:15.336353 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Nov 6 05:53:15.336534 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Nov 6 05:53:15.336705 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Nov 6 05:53:15.336863 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Nov 6 05:53:15.337018 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Nov 6 05:53:15.337040 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 6 05:53:15.337054 kernel: PCI: CLS 0 bytes, default 64 Nov 6 05:53:15.337083 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 6 05:53:15.337098 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Nov 6 05:53:15.337112 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 6 05:53:15.337126 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns Nov 6 05:53:15.339330 kernel: Initialise system trusted keyrings Nov 6 05:53:15.339349 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 6 05:53:15.339363 kernel: Key type asymmetric registered Nov 6 05:53:15.339377 kernel: Asymmetric key parser 'x509' registered Nov 6 05:53:15.339390 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 6 05:53:15.339431 kernel: io scheduler 
mq-deadline registered Nov 6 05:53:15.339445 kernel: io scheduler kyber registered Nov 6 05:53:15.339459 kernel: io scheduler bfq registered Nov 6 05:53:15.339679 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 6 05:53:15.339854 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 6 05:53:15.340036 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.340241 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 6 05:53:15.340428 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 6 05:53:15.340635 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.340805 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 6 05:53:15.340971 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 6 05:53:15.342653 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.342851 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 6 05:53:15.343062 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 6 05:53:15.343289 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.343468 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Nov 6 05:53:15.343650 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 6 05:53:15.343817 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.343984 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 6 05:53:15.344201 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 6 05:53:15.344377 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.344583 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 6 05:53:15.344750 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 6 05:53:15.344915 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.345082 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 6 05:53:15.345294 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 6 05:53:15.345464 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 6 05:53:15.345492 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 6 05:53:15.345507 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 6 05:53:15.345533 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 6 05:53:15.345548 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 6 05:53:15.345562 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 6 05:53:15.345576 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 6 05:53:15.345605 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 6 05:53:15.345619 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 6 05:53:15.345792 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 6 05:53:15.345815 kernel: input: AT Translated Set 2 keyboard as 
/devices/platform/i8042/serio0/input/input0 Nov 6 05:53:15.345983 kernel: rtc_cmos 00:03: registered as rtc0 Nov 6 05:53:15.346161 kernel: rtc_cmos 00:03: setting system clock to 2025-11-06T05:53:13 UTC (1762408393) Nov 6 05:53:15.346323 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 6 05:53:15.346366 kernel: intel_pstate: CPU model not supported Nov 6 05:53:15.346381 kernel: NET: Registered PF_INET6 protocol family Nov 6 05:53:15.346395 kernel: Segment Routing with IPv6 Nov 6 05:53:15.346409 kernel: In-situ OAM (IOAM) with IPv6 Nov 6 05:53:15.346423 kernel: NET: Registered PF_PACKET protocol family Nov 6 05:53:15.346437 kernel: Key type dns_resolver registered Nov 6 05:53:15.346450 kernel: IPI shorthand broadcast: enabled Nov 6 05:53:15.346464 kernel: sched_clock: Marking stable (2260004030, 227332645)->(2615998540, -128661865) Nov 6 05:53:15.346478 kernel: registered taskstats version 1 Nov 6 05:53:15.346492 kernel: Loading compiled-in X.509 certificates Nov 6 05:53:15.346535 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: edee08bd79f57120bcf336d97df00a0ad5e85412' Nov 6 05:53:15.346565 kernel: Demotion targets for Node 0: null Nov 6 05:53:15.346579 kernel: Key type .fscrypt registered Nov 6 05:53:15.346593 kernel: Key type fscrypt-provisioning registered Nov 6 05:53:15.346606 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 6 05:53:15.346620 kernel: ima: Allocated hash algorithm: sha1 Nov 6 05:53:15.346633 kernel: ima: No architecture policies found Nov 6 05:53:15.346646 kernel: clk: Disabling unused clocks Nov 6 05:53:15.346660 kernel: Freeing unused kernel image (initmem) memory: 15356K Nov 6 05:53:15.346687 kernel: Write protecting the kernel read-only data: 45056k Nov 6 05:53:15.346701 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Nov 6 05:53:15.346714 kernel: Run /init as init process Nov 6 05:53:15.346727 kernel: with arguments: Nov 6 05:53:15.346741 kernel: /init Nov 6 05:53:15.346755 kernel: with environment: Nov 6 05:53:15.346768 kernel: HOME=/ Nov 6 05:53:15.346781 kernel: TERM=linux Nov 6 05:53:15.346794 kernel: ACPI: bus type USB registered Nov 6 05:53:15.346818 kernel: usbcore: registered new interface driver usbfs Nov 6 05:53:15.346833 kernel: usbcore: registered new interface driver hub Nov 6 05:53:15.346853 kernel: usbcore: registered new device driver usb Nov 6 05:53:15.347035 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 6 05:53:15.347246 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 6 05:53:15.347416 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 6 05:53:15.347604 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 6 05:53:15.347775 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 6 05:53:15.347950 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 6 05:53:15.348208 kernel: hub 1-0:1.0: USB hub found Nov 6 05:53:15.348403 kernel: hub 1-0:1.0: 4 ports detected Nov 6 05:53:15.348619 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 6 05:53:15.348821 kernel: hub 2-0:1.0: USB hub found Nov 6 05:53:15.349004 kernel: hub 2-0:1.0: 4 ports detected Nov 6 05:53:15.349025 kernel: SCSI subsystem initialized Nov 6 05:53:15.349055 kernel: libata version 3.00 loaded. 
Nov 6 05:53:15.349244 kernel: ahci 0000:00:1f.2: version 3.0 Nov 6 05:53:15.349268 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 6 05:53:15.349426 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 6 05:53:15.349629 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 6 05:53:15.349829 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 6 05:53:15.350020 kernel: scsi host0: ahci Nov 6 05:53:15.350243 kernel: scsi host1: ahci Nov 6 05:53:15.350423 kernel: scsi host2: ahci Nov 6 05:53:15.350622 kernel: scsi host3: ahci Nov 6 05:53:15.350806 kernel: scsi host4: ahci Nov 6 05:53:15.350997 kernel: scsi host5: ahci Nov 6 05:53:15.351018 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Nov 6 05:53:15.351049 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Nov 6 05:53:15.351063 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Nov 6 05:53:15.351077 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Nov 6 05:53:15.351098 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Nov 6 05:53:15.351111 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Nov 6 05:53:15.351343 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 6 05:53:15.351367 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 6 05:53:15.351381 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 6 05:53:15.351418 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 6 05:53:15.351432 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 6 05:53:15.351446 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 6 05:53:15.351468 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 6 05:53:15.351481 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 6 05:53:15.351495 kernel: usbcore: registered new interface driver usbhid Nov 6 05:53:15.351696 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 6 05:53:15.351718 kernel: usbhid: USB HID core driver Nov 6 05:53:15.351895 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 6 05:53:15.351916 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 6 05:53:15.351930 kernel: GPT:25804799 != 125829119 Nov 6 05:53:15.351943 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 6 05:53:15.351957 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 6 05:53:15.351971 kernel: GPT:25804799 != 125829119 Nov 6 05:53:15.352239 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 6 05:53:15.352276 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 6 05:53:15.352291 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 6 05:53:15.352304 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 6 05:53:15.352318 kernel: device-mapper: uevent: version 1.0.3 Nov 6 05:53:15.352332 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 6 05:53:15.352345 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 6 05:53:15.352359 kernel: raid6: sse2x4 gen() 12977 MB/s Nov 6 05:53:15.352372 kernel: raid6: sse2x2 gen() 8948 MB/s Nov 6 05:53:15.352386 kernel: raid6: sse2x1 gen() 9558 MB/s Nov 6 05:53:15.352411 kernel: raid6: using algorithm sse2x4 gen() 12977 MB/s Nov 6 05:53:15.352425 kernel: raid6: .... xor() 7149 MB/s, rmw enabled Nov 6 05:53:15.352438 kernel: raid6: using ssse3x2 recovery algorithm Nov 6 05:53:15.352452 kernel: xor: automatically using best checksumming function avx Nov 6 05:53:15.352466 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 6 05:53:15.352479 kernel: BTRFS: device fsid b5cf1d69-dae6-4f65-bb6f-44a747495a60 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (193) Nov 6 05:53:15.352493 kernel: BTRFS info (device dm-0): first mount of filesystem b5cf1d69-dae6-4f65-bb6f-44a747495a60 Nov 6 05:53:15.352507 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:53:15.352533 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 6 05:53:15.352559 kernel: BTRFS info (device dm-0): enabling free space tree Nov 6 05:53:15.352573 kernel: loop: module loaded Nov 6 05:53:15.352587 kernel: loop0: detected capacity change from 0 to 101000 Nov 6 05:53:15.352600 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 05:53:15.352616 systemd[1]: Successfully made /usr/ read-only. Nov 6 05:53:15.352633 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 05:53:15.352648 systemd[1]: Detected virtualization kvm. Nov 6 05:53:15.352673 systemd[1]: Detected architecture x86-64. Nov 6 05:53:15.352687 systemd[1]: Running in initrd. Nov 6 05:53:15.352702 systemd[1]: No hostname configured, using default hostname. Nov 6 05:53:15.352716 systemd[1]: Hostname set to . Nov 6 05:53:15.352730 systemd[1]: Initializing machine ID from VM UUID. Nov 6 05:53:15.352744 systemd[1]: Queued start job for default target initrd.target. Nov 6 05:53:15.352758 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 05:53:15.352773 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 05:53:15.352787 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 05:53:15.352812 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 6 05:53:15.352827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 05:53:15.352842 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 6 05:53:15.352857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 6 05:53:15.352872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 05:53:15.352886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Nov 6 05:53:15.352910 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 6 05:53:15.352925 systemd[1]: Reached target paths.target - Path Units. Nov 6 05:53:15.352939 systemd[1]: Reached target slices.target - Slice Units. Nov 6 05:53:15.352953 systemd[1]: Reached target swap.target - Swaps. Nov 6 05:53:15.352968 systemd[1]: Reached target timers.target - Timer Units. Nov 6 05:53:15.352982 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 05:53:15.352996 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 05:53:15.353010 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 6 05:53:15.353025 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 6 05:53:15.353050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 05:53:15.353064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 05:53:15.353079 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 05:53:15.353093 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 05:53:15.353108 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 6 05:53:15.353122 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 6 05:53:15.353153 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 05:53:15.353169 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 6 05:53:15.353196 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 6 05:53:15.353212 systemd[1]: Starting systemd-fsck-usr.service... Nov 6 05:53:15.353226 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 05:53:15.353240 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 05:53:15.353254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:53:15.353269 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 6 05:53:15.353295 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 05:53:15.353309 systemd[1]: Finished systemd-fsck-usr.service. Nov 6 05:53:15.353324 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 6 05:53:15.353387 systemd-journald[328]: Collecting audit messages is disabled. Nov 6 05:53:15.353434 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 6 05:53:15.353448 kernel: Bridge firewalling registered Nov 6 05:53:15.353463 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 05:53:15.353477 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 05:53:15.353492 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 6 05:53:15.353507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 05:53:15.353534 systemd-journald[328]: Journal started Nov 6 05:53:15.353573 systemd-journald[328]: Runtime Journal (/run/log/journal/c28646dbafcd406dabbb82b8c9f433e2) is 4.7M, max 37.8M, 33M free. 
Nov 6 05:53:15.296420 systemd-modules-load[332]: Inserted module 'br_netfilter' Nov 6 05:53:15.403171 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 05:53:15.409843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:53:15.411651 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 05:53:15.419330 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 6 05:53:15.423304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 05:53:15.424395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 05:53:15.446359 systemd-tmpfiles[354]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 6 05:53:15.456824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 05:53:15.458126 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 05:53:15.461105 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 6 05:53:15.465313 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 05:53:15.503929 dracut-cmdline[367]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=42c7eeb79a8ee89597bba4204806137326be9acdbca65a8fd923766f65b62f69 Nov 6 05:53:15.539846 systemd-resolved[368]: Positive Trust Anchors: Nov 6 05:53:15.539878 systemd-resolved[368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 05:53:15.539923 systemd-resolved[368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 05:53:15.569608 systemd-resolved[368]: Defaulting to hostname 'linux'. Nov 6 05:53:15.572465 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 05:53:15.573719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 05:53:15.638221 kernel: Loading iSCSI transport class v2.0-870. Nov 6 05:53:15.656226 kernel: iscsi: registered transport (tcp) Nov 6 05:53:15.685503 kernel: iscsi: registered transport (qla4xxx) Nov 6 05:53:15.685596 kernel: QLogic iSCSI HBA Driver Nov 6 05:53:15.721790 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 05:53:15.740659 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 05:53:15.744489 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 05:53:15.813538 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Nov 6 05:53:15.816501 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 6 05:53:15.819323 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 6 05:53:15.863461 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 6 05:53:15.868027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 05:53:15.909456 systemd-udevd[606]: Using default interface naming scheme 'v255'. Nov 6 05:53:15.926067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 05:53:15.930906 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 6 05:53:15.972260 dracut-pre-trigger[674]: rd.md=0: removing MD RAID activation Nov 6 05:53:15.973095 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 05:53:15.978337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 05:53:16.013026 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 05:53:16.018332 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 05:53:16.047750 systemd-networkd[717]: lo: Link UP Nov 6 05:53:16.048770 systemd-networkd[717]: lo: Gained carrier Nov 6 05:53:16.049515 systemd-networkd[717]: Enumeration completed Nov 6 05:53:16.049668 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 05:53:16.050579 systemd[1]: Reached target network.target - Network. Nov 6 05:53:16.173092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 05:53:16.180398 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 6 05:53:16.327572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 6 05:53:16.376056 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 6 05:53:16.391057 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 6 05:53:16.408119 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 05:53:16.413480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 6 05:53:16.437689 kernel: cryptd: max_cpu_qlen set to 1000 Nov 6 05:53:16.440793 disk-uuid[773]: Primary Header is updated. Nov 6 05:53:16.440793 disk-uuid[773]: Secondary Entries is updated. Nov 6 05:53:16.440793 disk-uuid[773]: Secondary Header is updated. Nov 6 05:53:16.480168 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 6 05:53:16.486161 kernel: AES CTR mode by8 optimization enabled Nov 6 05:53:16.491810 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 05:53:16.493068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:53:16.495086 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:53:16.502522 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:53:16.505994 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 05:53:16.611648 systemd-networkd[717]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 6 05:53:16.611662 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 05:53:16.614451 systemd-networkd[717]: eth0: Link UP Nov 6 05:53:16.614855 systemd-networkd[717]: eth0: Gained carrier Nov 6 05:53:16.614870 systemd-networkd[717]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 05:53:16.625199 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 6 05:53:16.644259 systemd-networkd[717]: eth0: DHCPv4 address 10.230.27.98/30, gateway 10.230.27.97 acquired from 10.230.27.97 Nov 6 05:53:16.656219 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:53:16.659391 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 05:53:16.660917 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 05:53:16.662496 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 05:53:16.665494 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 6 05:53:16.694167 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 6 05:53:17.539482 disk-uuid[775]: Warning: The kernel is still using the old partition table. Nov 6 05:53:17.539482 disk-uuid[775]: The new table will be used at the next reboot or after you Nov 6 05:53:17.539482 disk-uuid[775]: run partprobe(8) or kpartx(8) Nov 6 05:53:17.539482 disk-uuid[775]: The operation has completed successfully. Nov 6 05:53:17.546628 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 6 05:53:17.546827 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 6 05:53:17.549663 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 6 05:53:17.590185 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859) Nov 6 05:53:17.595524 kernel: BTRFS info (device vda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:53:17.595573 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:53:17.600565 kernel: BTRFS info (device vda6): turning on async discard Nov 6 05:53:17.600621 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 05:53:17.611308 kernel: BTRFS info (device vda6): last unmount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:53:17.612011 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 6 05:53:17.614543 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 6 05:53:17.855954 ignition[878]: Ignition 2.22.0 Nov 6 05:53:17.857164 ignition[878]: Stage: fetch-offline Nov 6 05:53:17.857265 ignition[878]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:53:17.857288 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 6 05:53:17.858025 ignition[878]: parsed url from cmdline: "" Nov 6 05:53:17.858044 ignition[878]: no config URL provided Nov 6 05:53:17.858060 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 05:53:17.861582 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 6 05:53:17.858079 ignition[878]: no config at "/usr/lib/ignition/user.ign" Nov 6 05:53:17.858094 ignition[878]: failed to fetch config: resource requires networking Nov 6 05:53:17.859777 ignition[878]: Ignition finished successfully Nov 6 05:53:17.865323 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 6 05:53:17.903907 ignition[885]: Ignition 2.22.0 Nov 6 05:53:17.903931 ignition[885]: Stage: fetch Nov 6 05:53:17.904139 ignition[885]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:53:17.904171 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 6 05:53:17.904289 ignition[885]: parsed url from cmdline: "" Nov 6 05:53:17.904296 ignition[885]: no config URL provided Nov 6 05:53:17.904306 ignition[885]: reading system config file "/usr/lib/ignition/user.ign" Nov 6 05:53:17.904319 ignition[885]: no config at "/usr/lib/ignition/user.ign" Nov 6 05:53:17.904504 ignition[885]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Nov 6 05:53:17.904553 ignition[885]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Nov 6 05:53:17.904652 ignition[885]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Nov 6 05:53:17.921459 ignition[885]: GET result: OK Nov 6 05:53:17.922308 ignition[885]: parsing config with SHA512: 9af3bf48ac865b474009607f5a935d56e83c88cde12085c7dd5ec2dacbcd47c40a6e6ef8d5e5ab05d232099b8c9bbb3d4187e44dd77bb3f8115b3f083a59112f Nov 6 05:53:17.927710 unknown[885]: fetched base config from "system" Nov 6 05:53:17.927768 unknown[885]: fetched base config from "system" Nov 6 05:53:17.928312 ignition[885]: fetch: fetch complete Nov 6 05:53:17.927780 unknown[885]: fetched user config from "openstack" Nov 6 05:53:17.928322 ignition[885]: fetch: fetch passed Nov 6 05:53:17.931367 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 6 05:53:17.928395 ignition[885]: Ignition finished successfully Nov 6 05:53:17.940216 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 6 05:53:17.982222 ignition[891]: Ignition 2.22.0 Nov 6 05:53:17.982247 ignition[891]: Stage: kargs Nov 6 05:53:17.982479 ignition[891]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:53:17.982497 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 6 05:53:17.983549 ignition[891]: kargs: kargs passed Nov 6 05:53:17.983623 ignition[891]: Ignition finished successfully Nov 6 05:53:17.986646 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 6 05:53:17.989356 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 6 05:53:18.024895 ignition[898]: Ignition 2.22.0 Nov 6 05:53:18.026015 ignition[898]: Stage: disks Nov 6 05:53:18.027481 ignition[898]: no configs at "/usr/lib/ignition/base.d" Nov 6 05:53:18.027499 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 6 05:53:18.030125 ignition[898]: disks: disks passed Nov 6 05:53:18.030850 ignition[898]: Ignition finished successfully Nov 6 05:53:18.032349 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 6 05:53:18.033755 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 6 05:53:18.034765 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 6 05:53:18.036483 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 05:53:18.038060 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 6 05:53:18.039594 systemd[1]: Reached target basic.target - Basic System. Nov 6 05:53:18.042452 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 6 05:53:18.089495 systemd-fsck[906]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 6 05:53:18.092983 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 6 05:53:18.095845 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 6 05:53:18.133681 systemd-networkd[717]: eth0: Gained IPv6LL Nov 6 05:53:18.243293 kernel: EXT4-fs (vda9): mounted filesystem 05065f18-b1e1-4b9e-83f5-1a1189e0d083 r/w with ordered data mode. Quota mode: none. Nov 6 05:53:18.242710 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 6 05:53:18.244933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 6 05:53:18.248505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 05:53:18.252220 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 6 05:53:18.255390 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 6 05:53:18.260294 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Nov 6 05:53:18.263233 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 6 05:53:18.264444 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 05:53:18.270088 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 6 05:53:18.275894 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 6 05:53:18.294215 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Nov 6 05:53:18.294253 kernel: BTRFS info (device vda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:53:18.294274 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:53:18.294294 kernel: BTRFS info (device vda6): turning on async discard Nov 6 05:53:18.294312 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 05:53:18.301201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 6 05:53:18.371161 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:18.389624 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory Nov 6 05:53:18.398466 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory Nov 6 05:53:18.407458 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory Nov 6 05:53:18.414065 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory Nov 6 05:53:18.532678 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 6 05:53:18.535350 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 6 05:53:18.538334 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 6 05:53:18.565377 kernel: BTRFS info (device vda6): last unmount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:53:18.576037 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 6 05:53:18.591346 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 6 05:53:18.607092 ignition[1032]: INFO : Ignition 2.22.0 Nov 6 05:53:18.607092 ignition[1032]: INFO : Stage: mount Nov 6 05:53:18.608989 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:53:18.608989 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 6 05:53:18.608989 ignition[1032]: INFO : mount: mount passed Nov 6 05:53:18.612217 ignition[1032]: INFO : Ignition finished successfully Nov 6 05:53:18.610527 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 6 05:53:19.404242 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:19.641424 systemd-networkd[717]: eth0: Ignoring DHCPv6 address 2a02:1348:179:86d8:24:19ff:fee6:1b62/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:86d8:24:19ff:fee6:1b62/64 assigned by NDisc. Nov 6 05:53:19.641440 systemd-networkd[717]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 6 05:53:21.419510 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:25.436183 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:25.443639 coreos-metadata[916]: Nov 06 05:53:25.443 WARN failed to locate config-drive, using the metadata service API instead Nov 6 05:53:25.471247 coreos-metadata[916]: Nov 06 05:53:25.471 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 6 05:53:25.487444 coreos-metadata[916]: Nov 06 05:53:25.487 INFO Fetch successful Nov 6 05:53:25.488614 coreos-metadata[916]: Nov 06 05:53:25.488 INFO wrote hostname srv-dhf6q.gb1.brightbox.com to /sysroot/etc/hostname Nov 6 05:53:25.492411 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Nov 6 05:53:25.492673 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Nov 6 05:53:25.497388 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 6 05:53:25.519329 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 6 05:53:25.551154 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1048) Nov 6 05:53:25.551225 kernel: BTRFS info (device vda6): first mount of filesystem 8a1691a9-0f9b-492f-9a94-8ffa2a579e5c Nov 6 05:53:25.552283 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 6 05:53:25.560027 kernel: BTRFS info (device vda6): turning on async discard Nov 6 05:53:25.560077 kernel: BTRFS info (device vda6): enabling free space tree Nov 6 05:53:25.563370 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 6 05:53:25.608784 ignition[1066]: INFO : Ignition 2.22.0 Nov 6 05:53:25.608784 ignition[1066]: INFO : Stage: files Nov 6 05:53:25.608784 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:53:25.608784 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 6 05:53:25.613535 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping Nov 6 05:53:25.613535 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 6 05:53:25.613535 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 6 05:53:25.617329 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 6 05:53:25.618398 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 6 05:53:25.619883 unknown[1066]: wrote ssh authorized keys file for user: core Nov 6 05:53:25.620961 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 6 05:53:25.622629 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 05:53:25.623942 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 6 05:53:25.852396 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 6 05:53:26.131874 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 6 05:53:26.131874 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:53:26.140369 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 6 05:53:26.583426 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 6 05:53:28.640909 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 6 05:53:28.644775 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 6 05:53:28.644775 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 05:53:28.648192 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 6 05:53:28.649546 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 6 05:53:28.649546 ignition[1066]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 6 05:53:28.649546 ignition[1066]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 6 05:53:28.652628 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 6 05:53:28.652628 ignition[1066]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 6 05:53:28.652628 ignition[1066]: INFO : files: files passed Nov 6 05:53:28.652628 ignition[1066]: INFO : Ignition finished successfully Nov 6 05:53:28.652163 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 6 05:53:28.657904 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 6 05:53:28.663331 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 6 05:53:28.683528 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 6 05:53:28.684547 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 6 05:53:28.695342 initrd-setup-root-after-ignition[1097]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 05:53:28.695342 initrd-setup-root-after-ignition[1097]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 6 05:53:28.698001 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 6 05:53:28.698650 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 05:53:28.700521 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 6 05:53:28.702900 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 6 05:53:28.770058 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 6 05:53:28.770312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 6 05:53:28.772740 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 6 05:53:28.773865 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 6 05:53:28.775738 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 6 05:53:28.778329 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 6 05:53:28.812104 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 05:53:28.816308 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 6 05:53:28.839914 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 6 05:53:28.841332 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 6 05:53:28.842252 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 05:53:28.844840 systemd[1]: Stopped target timers.target - Timer Units. Nov 6 05:53:28.845933 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 6 05:53:28.846209 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 6 05:53:28.848530 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 6 05:53:28.849414 systemd[1]: Stopped target basic.target - Basic System. Nov 6 05:53:28.850991 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 6 05:53:28.852412 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 6 05:53:28.853801 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 6 05:53:28.855587 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 6 05:53:28.857179 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 6 05:53:28.859037 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 6 05:53:28.860696 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 6 05:53:28.862183 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 6 05:53:28.863808 systemd[1]: Stopped target swap.target - Swaps. Nov 6 05:53:28.865260 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 6 05:53:28.865466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 6 05:53:28.867207 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 6 05:53:28.868323 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 05:53:28.869633 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 6 05:53:28.869839 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 05:53:28.871436 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 6 05:53:28.871708 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 6 05:53:28.873652 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 6 05:53:28.873898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 6 05:53:28.875580 systemd[1]: ignition-files.service: Deactivated successfully. Nov 6 05:53:28.875811 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 6 05:53:28.879410 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 6 05:53:28.880191 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 6 05:53:28.881296 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 6 05:53:28.885357 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 6 05:53:28.886217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 6 05:53:28.887305 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 05:53:28.895692 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 6 05:53:28.895899 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 6 05:53:28.903525 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 6 05:53:28.903671 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 6 05:53:28.930563 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 6 05:53:28.948768 ignition[1121]: INFO : Ignition 2.22.0 Nov 6 05:53:28.951242 ignition[1121]: INFO : Stage: umount Nov 6 05:53:28.951242 ignition[1121]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 6 05:53:28.951242 ignition[1121]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 6 05:53:28.953693 ignition[1121]: INFO : umount: umount passed Nov 6 05:53:28.953693 ignition[1121]: INFO : Ignition finished successfully Nov 6 05:53:28.955468 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 6 05:53:28.955697 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 6 05:53:28.957452 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 6 05:53:28.957556 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 6 05:53:28.958493 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 6 05:53:28.958579 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 6 05:53:28.959883 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 6 05:53:28.959958 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 6 05:53:28.961378 systemd[1]: Stopped target network.target - Network. Nov 6 05:53:28.962654 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 6 05:53:28.962733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 6 05:53:28.964201 systemd[1]: Stopped target paths.target - Path Units. Nov 6 05:53:28.965467 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 6 05:53:28.969258 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 05:53:28.970298 systemd[1]: Stopped target slices.target - Slice Units. Nov 6 05:53:28.971845 systemd[1]: Stopped target sockets.target - Socket Units. Nov 6 05:53:28.973638 systemd[1]: iscsid.socket: Deactivated successfully. Nov 6 05:53:28.973720 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 6 05:53:28.974931 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 6 05:53:28.975001 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 6 05:53:28.976300 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 6 05:53:28.976397 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 6 05:53:28.977695 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 6 05:53:28.977773 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 6 05:53:28.979535 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 6 05:53:28.981550 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Nov 6 05:53:28.984342 systemd-networkd[717]: eth0: DHCPv6 lease lost Nov 6 05:53:28.991005 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 6 05:53:28.991257 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 6 05:53:28.994813 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Nov 6 05:53:28.995281 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 6 05:53:28.995459 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 6 05:53:29.000238 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Nov 6 05:53:29.001004 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 6 05:53:29.002187 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 6 05:53:29.002269 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 6 05:53:29.004802 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 6 05:53:29.006796 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 6 05:53:29.006909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 6 05:53:29.009615 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 05:53:29.009702 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 05:53:29.012235 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 6 05:53:29.012306 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 6 05:53:29.014795 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 6 05:53:29.014899 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 05:53:29.017545 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 05:53:29.025675 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 05:53:29.025797 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 05:53:29.034874 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 05:53:29.035163 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 05:53:29.037355 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 05:53:29.037542 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 05:53:29.039174 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 05:53:29.039244 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 05:53:29.040717 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 05:53:29.040798 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 05:53:29.043034 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 05:53:29.043172 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 05:53:29.044625 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 6 05:53:29.044714 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 05:53:29.047741 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 05:53:29.050462 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Nov 6 05:53:29.050565 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 05:53:29.053670 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 05:53:29.053765 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 05:53:29.056514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 05:53:29.056607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:53:29.060295 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Nov 6 05:53:29.060383 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 05:53:29.060469 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 05:53:29.075847 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 05:53:29.077193 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 05:53:29.080474 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 05:53:29.080687 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 05:53:29.082679 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 05:53:29.082780 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 05:53:29.109715 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 05:53:29.110079 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 05:53:29.112701 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 05:53:29.115232 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 05:53:29.138163 systemd[1]: Switching root. Nov 6 05:53:29.182808 systemd-journald[328]: Journal stopped Nov 6 05:53:30.683657 systemd-journald[328]: Received SIGTERM from PID 1 (systemd). Nov 6 05:53:30.683774 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 05:53:30.683813 kernel: SELinux: policy capability open_perms=1 Nov 6 05:53:30.683833 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 05:53:30.683852 kernel: SELinux: policy capability always_check_network=0 Nov 6 05:53:30.683879 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 05:53:30.683916 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 05:53:30.683937 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 05:53:30.683956 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 05:53:30.683975 kernel: SELinux: policy capability userspace_initial_context=0 Nov 6 05:53:30.684001 kernel: audit: type=1403 audit(1762408409.430:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 05:53:30.684040 systemd[1]: Successfully loaded SELinux policy in 78.762ms. Nov 6 05:53:30.684076 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.761ms. Nov 6 05:53:30.684099 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 05:53:30.684128 systemd[1]: Detected virtualization kvm. Nov 6 05:53:30.685225 systemd[1]: Detected architecture x86-64. Nov 6 05:53:30.685252 systemd[1]: Detected first boot. 
Nov 6 05:53:30.685274 systemd[1]: Hostname set to <srv-dhf6q.gb1.brightbox.com>. Nov 6 05:53:30.685302 systemd[1]: Initializing machine ID from VM UUID. Nov 6 05:53:30.685323 zram_generator::config[1165]: No configuration found. Nov 6 05:53:30.685345 kernel: Guest personality initialized and is inactive Nov 6 05:53:30.685376 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 6 05:53:30.685394 kernel: Initialized host personality Nov 6 05:53:30.685437 kernel: NET: Registered PF_VSOCK protocol family Nov 6 05:53:30.685459 systemd[1]: Populated /etc with preset unit settings. Nov 6 05:53:30.685482 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 05:53:30.685502 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 6 05:53:30.685522 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 05:53:30.685543 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 05:53:30.685564 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 05:53:30.685586 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 05:53:30.685606 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 05:53:30.685640 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 05:53:30.685662 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 05:53:30.685683 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 05:53:30.685703 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 05:53:30.685723 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 05:53:30.685743 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 05:53:30.685763 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 05:53:30.685784 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 05:53:30.685816 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 05:53:30.685839 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 05:53:30.685860 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 05:53:30.685906 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 6 05:53:30.685929 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 6 05:53:30.685950 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 05:53:30.685971 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 05:53:30.685990 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 05:53:30.686010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 05:53:30.686043 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 05:53:30.686065 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 05:53:30.686086 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 05:53:30.686106 systemd[1]: Reached target slices.target - Slice Units. Nov 6 05:53:30.687167 systemd[1]: Reached target swap.target - Swaps.
Nov 6 05:53:30.687199 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 05:53:30.687228 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 05:53:30.687250 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 05:53:30.687271 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 05:53:30.687301 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 05:53:30.687323 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 05:53:30.687344 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 05:53:30.687364 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 05:53:30.687399 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 05:53:30.687422 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 05:53:30.687443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:53:30.687471 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 05:53:30.687505 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 05:53:30.687524 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 6 05:53:30.687544 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 05:53:30.687564 systemd[1]: Reached target machines.target - Containers. Nov 6 05:53:30.687608 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 05:53:30.688249 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 05:53:30.688274 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 05:53:30.688295 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 05:53:30.688316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 05:53:30.688336 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 05:53:30.688358 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 05:53:30.688378 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 05:53:30.688397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 05:53:30.688435 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 05:53:30.688457 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 05:53:30.688479 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 05:53:30.688500 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 05:53:30.688541 systemd[1]: Stopped systemd-fsck-usr.service. Nov 6 05:53:30.688564 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 05:53:30.688586 systemd[1]: Starting systemd-journald.service - Journal Service... 
Nov 6 05:53:30.688620 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 05:53:30.688642 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 05:53:30.688663 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 05:53:30.688684 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 05:53:30.688704 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 05:53:30.688725 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:53:30.688758 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 6 05:53:30.688791 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 05:53:30.688814 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 05:53:30.688841 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 05:53:30.688873 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 05:53:30.688915 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 05:53:30.688938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 05:53:30.688981 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 05:53:30.689003 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 05:53:30.689038 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 05:53:30.689061 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 05:53:30.689082 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 05:53:30.689102 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 05:53:30.690195 kernel: fuse: init (API version 7.41) Nov 6 05:53:30.690227 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 05:53:30.690290 systemd-journald[1262]: Collecting audit messages is disabled. Nov 6 05:53:30.690340 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 05:53:30.690363 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 05:53:30.690384 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 05:53:30.690404 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 05:53:30.690438 systemd-journald[1262]: Journal started Nov 6 05:53:30.690472 systemd-journald[1262]: Runtime Journal (/run/log/journal/c28646dbafcd406dabbb82b8c9f433e2) is 4.7M, max 37.8M, 33M free. Nov 6 05:53:30.276279 systemd[1]: Queued start job for default target multi-user.target. Nov 6 05:53:30.300681 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 05:53:30.301446 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 05:53:30.695202 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 05:53:30.698653 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 05:53:30.701371 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 6 05:53:30.702568 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 05:53:30.705754 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Nov 6 05:53:30.725802 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 05:53:30.732160 kernel: ACPI: bus type drm_connector registered Nov 6 05:53:30.734284 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 05:53:30.737332 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 05:53:30.739230 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 05:53:30.739278 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 05:53:30.742712 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 05:53:30.759307 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 05:53:30.761980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 05:53:30.764168 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 05:53:30.768851 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 05:53:30.769744 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 05:53:30.772380 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 05:53:30.773275 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 05:53:30.777386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 05:53:30.783495 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 05:53:30.788472 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 05:53:30.792458 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 05:53:30.793235 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 05:53:30.794585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 05:53:30.796552 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 05:53:30.819874 systemd-journald[1262]: Time spent on flushing to /var/log/journal/c28646dbafcd406dabbb82b8c9f433e2 is 83.520ms for 1154 entries. Nov 6 05:53:30.819874 systemd-journald[1262]: System Journal (/var/log/journal/c28646dbafcd406dabbb82b8c9f433e2) is 8M, max 588.1M, 580.1M free. Nov 6 05:53:30.914962 systemd-journald[1262]: Received client request to flush runtime journal. Nov 6 05:53:30.915057 kernel: loop1: detected capacity change from 0 to 111544 Nov 6 05:53:30.826057 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 05:53:30.827629 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 05:53:30.832742 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 05:53:30.910342 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 05:53:30.917428 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 05:53:30.923840 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 05:53:30.944664 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Nov 6 05:53:30.950322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 05:53:30.966174 kernel: loop2: detected capacity change from 0 to 229808 Nov 6 05:53:31.014168 kernel: loop3: detected capacity change from 0 to 119080 Nov 6 05:53:31.042548 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Nov 6 05:53:31.042578 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Nov 6 05:53:31.060542 kernel: loop4: detected capacity change from 0 to 8 Nov 6 05:53:31.063242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 05:53:31.109474 kernel: loop5: detected capacity change from 0 to 111544 Nov 6 05:53:31.150372 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 05:53:31.179161 kernel: loop6: detected capacity change from 0 to 229808 Nov 6 05:53:31.209488 kernel: loop7: detected capacity change from 0 to 119080 Nov 6 05:53:31.272171 kernel: loop1: detected capacity change from 0 to 8 Nov 6 05:53:31.273070 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Nov 6 05:53:31.274437 (sd-merge)[1324]: Merged extensions into '/usr'. Nov 6 05:53:31.280036 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 05:53:31.280181 systemd[1]: Reloading... Nov 6 05:53:31.404173 zram_generator::config[1350]: No configuration found. Nov 6 05:53:31.736166 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 05:53:31.853128 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 05:53:31.853551 systemd[1]: Reloading finished in 572 ms. Nov 6 05:53:31.881092 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 05:53:31.885590 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 6 05:53:31.887013 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 05:53:31.913302 systemd[1]: Starting ensure-sysext.service... Nov 6 05:53:31.917291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 05:53:31.920023 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 05:53:31.938640 systemd[1]: Reload requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Nov 6 05:53:31.938675 systemd[1]: Reloading... Nov 6 05:53:31.977651 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 6 05:53:31.978231 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 6 05:53:31.978822 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 05:53:31.979615 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 05:53:31.981680 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 05:53:31.982098 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Nov 6 05:53:31.985289 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Nov 6 05:53:31.995456 systemd-udevd[1410]: Using default interface naming scheme 'v255'. 
Nov 6 05:53:31.998940 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 05:53:31.998960 systemd-tmpfiles[1409]: Skipping /boot Nov 6 05:53:32.040846 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 05:53:32.040868 systemd-tmpfiles[1409]: Skipping /boot Nov 6 05:53:32.064169 zram_generator::config[1435]: No configuration found. Nov 6 05:53:32.288157 kernel: mousedev: PS/2 mouse device common for all mice Nov 6 05:53:32.327184 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 6 05:53:32.335159 kernel: ACPI: button: Power Button [PWRF] Nov 6 05:53:32.494192 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 6 05:53:32.503168 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 6 05:53:32.586079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 05:53:32.588969 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 6 05:53:32.589785 systemd[1]: Reloading finished in 649 ms. Nov 6 05:53:32.609060 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 05:53:32.629206 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 05:53:32.741375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:53:32.745898 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 05:53:32.751562 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 05:53:32.752635 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 05:53:32.754532 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 05:53:32.759756 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 05:53:32.763980 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 05:53:32.764900 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 05:53:32.769514 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 05:53:32.770330 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 05:53:32.777548 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 05:53:32.785751 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 05:53:32.793350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 05:53:32.798508 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 05:53:32.809503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 05:53:32.811228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:53:32.817069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Nov 6 05:53:32.818052 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 05:53:32.832943 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:53:32.833511 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 05:53:32.836690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 05:53:32.860772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 05:53:32.863203 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 05:53:32.863270 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 05:53:32.879417 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 05:53:32.880211 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 6 05:53:32.884061 systemd[1]: Finished ensure-sysext.service. Nov 6 05:53:32.886430 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 6 05:53:32.887973 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 05:53:32.888914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 05:53:32.891490 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 05:53:32.891796 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 05:53:32.893904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 05:53:32.894203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 05:53:32.895312 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 05:53:32.895580 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 05:53:32.905810 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 05:53:32.929539 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 05:53:32.929775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 05:53:32.934508 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 05:53:32.935324 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 05:53:32.937809 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 05:53:32.963602 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 05:53:32.967327 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 05:53:32.995357 augenrules[1575]: No rules Nov 6 05:53:32.997385 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 05:53:32.998569 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Nov 6 05:53:33.001924 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 05:53:33.006162 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 05:53:33.159253 systemd-networkd[1532]: lo: Link UP Nov 6 05:53:33.159828 systemd-networkd[1532]: lo: Gained carrier Nov 6 05:53:33.162337 systemd-networkd[1532]: Enumeration completed Nov 6 05:53:33.162855 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 05:53:33.164560 systemd-networkd[1532]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 05:53:33.164954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 05:53:33.166181 systemd-networkd[1532]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 6 05:53:33.170432 systemd-networkd[1532]: eth0: Link UP Nov 6 05:53:33.170872 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 05:53:33.174897 systemd-networkd[1532]: eth0: Gained carrier Nov 6 05:53:33.174959 systemd-networkd[1532]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 05:53:33.176019 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 05:53:33.179459 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 05:53:33.179566 systemd-resolved[1534]: Positive Trust Anchors: Nov 6 05:53:33.179583 systemd-resolved[1534]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 05:53:33.179629 systemd-resolved[1534]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 05:53:33.181346 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 05:53:33.186517 systemd-resolved[1534]: Using system hostname 'srv-dhf6q.gb1.brightbox.com'. Nov 6 05:53:33.189258 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 05:53:33.191296 systemd[1]: Reached target network.target - Network. Nov 6 05:53:33.191952 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 05:53:33.193231 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 05:53:33.194493 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 05:53:33.195334 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 05:53:33.196222 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 6 05:53:33.197445 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 05:53:33.198435 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 05:53:33.199241 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Nov 6 05:53:33.200191 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 05:53:33.200249 systemd[1]: Reached target paths.target - Path Units. Nov 6 05:53:33.200838 systemd-networkd[1532]: eth0: DHCPv4 address 10.230.27.98/30, gateway 10.230.27.97 acquired from 10.230.27.97 Nov 6 05:53:33.201568 systemd[1]: Reached target timers.target - Timer Units. Nov 6 05:53:33.202986 systemd-timesyncd[1561]: Network configuration changed, trying to establish connection. Nov 6 05:53:33.203588 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 05:53:33.207611 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 05:53:33.212483 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 05:53:33.213631 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 05:53:33.214468 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 6 05:53:33.217798 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 05:53:33.219028 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 05:53:33.220852 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 05:53:33.222860 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 05:53:33.223572 systemd[1]: Reached target basic.target - Basic System. Nov 6 05:53:33.224315 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 05:53:33.224372 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 05:53:33.227249 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 05:53:33.229854 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 6 05:53:33.234443 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 05:53:33.241110 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 05:53:33.248458 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 05:53:33.253397 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 05:53:33.254226 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 05:53:33.262500 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 6 05:53:33.267618 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 05:53:33.273895 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 05:53:33.278342 jq[1601]: false Nov 6 05:53:33.281250 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:33.283323 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 6 05:53:33.291488 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 05:53:33.298598 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing passwd entry cache Nov 6 05:53:33.300195 oslogin_cache_refresh[1603]: Refreshing passwd entry cache Nov 6 05:53:33.306791 systemd[1]: Starting systemd-logind.service - User Login Management... 
Nov 6 05:53:33.309456 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 05:53:33.311256 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 05:53:33.313466 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 05:53:33.316576 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 05:53:33.320561 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 05:53:33.326427 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 05:53:33.328679 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 05:53:33.334187 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 05:53:33.345440 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting users, quitting Nov 6 05:53:33.345440 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 05:53:33.345440 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing group entry cache Nov 6 05:53:33.344601 oslogin_cache_refresh[1603]: Failure getting users, quitting Nov 6 05:53:33.344645 oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 6 05:53:33.344755 oslogin_cache_refresh[1603]: Refreshing group entry cache Nov 6 05:53:33.351163 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting groups, quitting Nov 6 05:53:33.352168 oslogin_cache_refresh[1603]: Failure getting groups, quitting Nov 6 05:53:33.353452 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 05:53:33.352210 oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 6 05:53:33.354648 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 6 05:53:33.359817 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 6 05:53:33.375680 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 05:53:33.376088 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 05:53:33.377624 extend-filesystems[1602]: Found /dev/vda6 Nov 6 05:53:33.382478 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 05:53:33.383648 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 05:53:33.396024 extend-filesystems[1602]: Found /dev/vda9 Nov 6 05:53:33.407892 extend-filesystems[1602]: Checking size of /dev/vda9 Nov 6 05:53:33.432890 update_engine[1617]: I20251106 05:53:33.432729 1617 main.cc:92] Flatcar Update Engine starting Nov 6 05:53:33.438020 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 6 05:53:33.437674 dbus-daemon[1599]: [system] SELinux support is enabled Nov 6 05:53:33.447550 dbus-daemon[1599]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1532 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 6 05:53:33.447822 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 05:53:33.448410 jq[1618]: true Nov 6 05:53:33.447886 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 05:53:33.450314 tar[1623]: linux-amd64/LICENSE Nov 6 05:53:33.450314 tar[1623]: linux-amd64/helm Nov 6 05:53:33.449584 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 05:53:33.449621 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 05:53:33.454349 dbus-daemon[1599]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 6 05:53:33.460566 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 6 05:53:33.464333 update_engine[1617]: I20251106 05:53:33.461561 1617 update_check_scheduler.cc:74] Next update check in 4m35s Nov 6 05:53:33.461952 systemd[1]: Started update-engine.service - Update Engine. Nov 6 05:53:33.466851 extend-filesystems[1602]: Resized partition /dev/vda9 Nov 6 05:53:33.478173 extend-filesystems[1647]: resize2fs 1.47.3 (8-Jul-2025) Nov 6 05:53:33.492660 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Nov 6 05:53:33.492752 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 05:53:33.525685 jq[1642]: true Nov 6 05:53:33.605882 systemd-logind[1611]: Watching system buttons on /dev/input/event3 (Power Button) Nov 6 05:53:33.605915 systemd-logind[1611]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 6 05:53:33.610240 systemd-logind[1611]: New seat seat0. Nov 6 05:53:33.621029 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 05:53:33.849442 locksmithd[1648]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 05:53:33.882615 bash[1670]: Updated "/home/core/.ssh/authorized_keys" Nov 6 05:53:33.876950 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 05:53:33.891181 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Nov 6 05:53:33.890153 systemd[1]: Starting sshkeys.service... Nov 6 05:53:33.916876 extend-filesystems[1647]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 05:53:33.916876 extend-filesystems[1647]: old_desc_blocks = 1, new_desc_blocks = 7 Nov 6 05:53:33.916876 extend-filesystems[1647]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Nov 6 05:53:33.915698 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 05:53:33.941546 extend-filesystems[1602]: Resized filesystem in /dev/vda9 Nov 6 05:53:33.916092 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 05:53:33.932716 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
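Editorial note: the extend-filesystems output above is an online ext4 grow: /dev/vda9 is resized from 1617920 to 14138363 4k blocks while mounted at /. The sketch below is only a hedged illustration of the same operation (it is not Flatcar's extend-filesystems.service); it assumes root privileges and the stock resize2fs tool, which grows a mounted ext4 filesystem to fill its partition when called without a size argument.

```python
#!/usr/bin/env python3
"""Hedged sketch: grow a mounted ext4 filesystem to fill its partition.

Illustration only (assumes root and the e2fsprogs `resize2fs` binary);
not the extend-filesystems.service implementation from the log above.
"""
import subprocess


def grow_online(device: str = "/dev/vda9") -> None:
    # Confirm the device is actually mounted before attempting an online resize.
    with open("/proc/mounts") as mounts:
        if not any(line.split()[0] == device for line in mounts):
            raise SystemExit(f"{device} is not mounted; online resize skipped")
    # Without an explicit size argument, resize2fs expands the filesystem to
    # the full size of the underlying partition, as the log above shows.
    subprocess.run(["resize2fs", device], check=True)


if __name__ == "__main__":
    grow_online()
```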
Nov 6 05:53:33.937673 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 6 05:53:33.947772 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 6 05:53:33.951867 dbus-daemon[1599]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 6 05:53:33.957283 dbus-daemon[1599]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1646 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 6 05:53:33.968906 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:33.966812 systemd[1]: Starting polkit.service - Authorization Manager... Nov 6 05:53:34.043240 containerd[1638]: time="2025-11-06T05:53:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 6 05:53:34.044878 containerd[1638]: time="2025-11-06T05:53:34.044833457Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4 Nov 6 05:53:34.090470 containerd[1638]: time="2025-11-06T05:53:34.090397419Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="26.282µs" Nov 6 05:53:34.090997 containerd[1638]: time="2025-11-06T05:53:34.090811205Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.090918853Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.093185434Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.093516644Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.093545994Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.093667806Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.093688917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.094064732Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.094087050Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.094105036Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 6 05:53:34.094154 containerd[1638]: time="2025-11-06T05:53:34.094120384Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 6 05:53:34.098682 containerd[1638]: time="2025-11-06T05:53:34.098383383Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Nov 6 05:53:34.098682 containerd[1638]: time="2025-11-06T05:53:34.098414542Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 6 05:53:34.098682 containerd[1638]: time="2025-11-06T05:53:34.098621840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 6 05:53:34.100578 containerd[1638]: time="2025-11-06T05:53:34.100510909Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 05:53:34.101320 containerd[1638]: time="2025-11-06T05:53:34.101129506Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 6 05:53:34.101320 containerd[1638]: time="2025-11-06T05:53:34.101191293Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 6 05:53:34.101320 containerd[1638]: time="2025-11-06T05:53:34.101269020Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 6 05:53:34.102366 containerd[1638]: time="2025-11-06T05:53:34.102295611Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 6 05:53:34.106500 containerd[1638]: time="2025-11-06T05:53:34.106195442Z" level=info msg="metadata content store policy set" policy=shared Nov 6 05:53:34.113932 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117204496Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117302450Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117446688Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117469178Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117489725Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117513312Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117543374Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117563921Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117590574Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117616221Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117635170Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117652527Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117670740Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 6 05:53:34.118162 containerd[1638]: time="2025-11-06T05:53:34.117696520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.117899837Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.117957223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.117981336Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.118008116Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.118026885Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.118043815Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.118061546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.118090546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 6 05:53:34.118692 containerd[1638]: time="2025-11-06T05:53:34.118110703Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 6 05:53:34.121791 containerd[1638]: time="2025-11-06T05:53:34.121180774Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 6 05:53:34.121791 containerd[1638]: time="2025-11-06T05:53:34.121252889Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 6 05:53:34.121791 containerd[1638]: time="2025-11-06T05:53:34.121320503Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 6 05:53:34.121791 containerd[1638]: time="2025-11-06T05:53:34.121420670Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 6 05:53:34.121791 containerd[1638]: time="2025-11-06T05:53:34.121461671Z" level=info msg="Start snapshots syncer" Nov 6 05:53:34.121791 containerd[1638]: time="2025-11-06T05:53:34.121522589Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 6 05:53:34.122477 containerd[1638]: time="2025-11-06T05:53:34.122425110Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 6 05:53:34.126183 containerd[1638]: 
time="2025-11-06T05:53:34.125215675Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125470210Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125709421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125743057Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125770823Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125789498Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125851984Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125874452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125902559Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125932506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.125951011Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.126024215Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 05:53:34.126183 containerd[1638]: time="2025-11-06T05:53:34.126049469Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126663696Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126703973Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126722135Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126764899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126784699Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126896833Z" level=info msg="runtime interface created" Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126926353Z" level=info msg="created NRI interface" 
Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126946551Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.126980237Z" level=info msg="Connect containerd service" Nov 6 05:53:34.127328 containerd[1638]: time="2025-11-06T05:53:34.127019049Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 05:53:34.132529 containerd[1638]: time="2025-11-06T05:53:34.131422234Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 05:53:34.161900 polkitd[1681]: Started polkitd version 126 Nov 6 05:53:34.180926 polkitd[1681]: Loading rules from directory /etc/polkit-1/rules.d Nov 6 05:53:34.181384 polkitd[1681]: Loading rules from directory /run/polkit-1/rules.d Nov 6 05:53:34.181472 polkitd[1681]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 05:53:34.181827 polkitd[1681]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 6 05:53:34.181871 polkitd[1681]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 6 05:53:34.181944 polkitd[1681]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 6 05:53:34.190499 polkitd[1681]: Finished loading, compiling and executing 2 rules Nov 6 05:53:34.193636 systemd[1]: Started polkit.service - Authorization Manager. Nov 6 05:53:34.196970 dbus-daemon[1599]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 6 05:53:34.202024 sshd_keygen[1640]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 05:53:34.202323 polkitd[1681]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 6 05:53:34.245250 systemd-hostnamed[1646]: Hostname set to (static) Nov 6 05:53:34.258411 systemd-timesyncd[1561]: Contacted time server 176.58.109.184:123 (0.flatcar.pool.ntp.org). Nov 6 05:53:34.258513 systemd-timesyncd[1561]: Initial clock synchronization to Thu 2025-11-06 05:53:34.519500 UTC. Nov 6 05:53:34.272974 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 05:53:34.280006 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 05:53:34.285456 systemd[1]: Started sshd@0-10.230.27.98:22-139.178.68.195:35024.service - OpenSSH per-connection server daemon (139.178.68.195:35024). Nov 6 05:53:34.292078 containerd[1638]: time="2025-11-06T05:53:34.292031696Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 05:53:34.295261 containerd[1638]: time="2025-11-06T05:53:34.295208403Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295339594Z" level=info msg="Start subscribing containerd event" Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295392060Z" level=info msg="Start recovering state" Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295565412Z" level=info msg="Start event monitor" Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295587147Z" level=info msg="Start cni network conf syncer for default" Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295599030Z" level=info msg="Start streaming server" Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295619065Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295633865Z" level=info msg="runtime interface starting up..." Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295644177Z" level=info msg="starting plugins..." Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295672694Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 6 05:53:34.297161 containerd[1638]: time="2025-11-06T05:53:34.295882096Z" level=info msg="containerd successfully booted in 0.254851s" Nov 6 05:53:34.296284 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 05:53:34.331531 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 05:53:34.332079 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 05:53:34.337352 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 05:53:34.369168 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:34.372838 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 05:53:34.384842 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 05:53:34.389373 systemd-networkd[1532]: eth0: Gained IPv6LL Nov 6 05:53:34.391655 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 6 05:53:34.393938 systemd[1]: Reached target getty.target - Login Prompts. Nov 6 05:53:34.398336 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 05:53:34.402003 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 05:53:34.407490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:53:34.412705 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 05:53:34.476958 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 05:53:34.636626 tar[1623]: linux-amd64/README.md Nov 6 05:53:34.665092 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 05:53:35.004198 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:35.164741 sshd[1717]: Accepted publickey for core from 139.178.68.195 port 35024 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:35.166205 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:35.194437 systemd-logind[1611]: New session 1 of user core. Nov 6 05:53:35.196287 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 05:53:35.200697 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 05:53:35.239473 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Nov 6 05:53:35.244816 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 05:53:35.264197 (systemd)[1746]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 05:53:35.272191 systemd-logind[1611]: New session c1 of user core. Nov 6 05:53:35.464969 systemd[1746]: Queued start job for default target default.target. Nov 6 05:53:35.469666 systemd[1746]: Created slice app.slice - User Application Slice. Nov 6 05:53:35.470000 systemd[1746]: Reached target paths.target - Paths. Nov 6 05:53:35.470139 systemd[1746]: Reached target timers.target - Timers. Nov 6 05:53:35.474292 systemd[1746]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 05:53:35.490011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:53:35.496440 systemd[1746]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 05:53:35.496669 systemd[1746]: Reached target sockets.target - Sockets. Nov 6 05:53:35.496744 systemd[1746]: Reached target basic.target - Basic System. Nov 6 05:53:35.496824 systemd[1746]: Reached target default.target - Main User Target. Nov 6 05:53:35.496892 systemd[1746]: Startup finished in 212ms. Nov 6 05:53:35.497223 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 05:53:35.514792 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:53:35.515285 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 05:53:35.898849 systemd-networkd[1532]: eth0: Ignoring DHCPv6 address 2a02:1348:179:86d8:24:19ff:fee6:1b62/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:86d8:24:19ff:fee6:1b62/64 assigned by NDisc. Nov 6 05:53:35.898863 systemd-networkd[1532]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Nov 6 05:53:35.995281 systemd[1]: Started sshd@1-10.230.27.98:22-139.178.68.195:35032.service - OpenSSH per-connection server daemon (139.178.68.195:35032). Nov 6 05:53:36.179007 kubelet[1758]: E1106 05:53:36.178779 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:53:36.182673 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:53:36.182959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:53:36.183690 systemd[1]: kubelet.service: Consumed 1.101s CPU time, 268.3M memory peak. Nov 6 05:53:36.385242 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:36.811183 sshd[1769]: Accepted publickey for core from 139.178.68.195 port 35032 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:36.812081 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:36.819573 systemd-logind[1611]: New session 2 of user core. Nov 6 05:53:36.831631 systemd[1]: Started session-2.scope - Session 2 of User core. 
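Editorial note: the kubelet exit above (repeated on the later restarts in this log) is a configuration gap rather than a crash: /var/lib/kubelet/config.yaml does not exist yet, so the unit fails until provisioning (for example kubeadm) writes that file. A hedged pre-flight check along those lines could look like the sketch below; the path comes straight from the error message, everything else is illustrative.

```python
#!/usr/bin/env python3
"""Hedged sketch: pre-flight check mirroring the kubelet failure above.

The config path is taken from the log's error message; the check itself is
illustrative and is not part of the kubelet or of Flatcar's tooling.
"""
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")


def kubelet_ready() -> bool:
    # kubelet exits with status 1 while this file is absent, so systemd keeps
    # scheduling restarts until provisioning creates it.
    return KUBELET_CONFIG.is_file()


if __name__ == "__main__":
    if kubelet_ready():
        print(f"{KUBELET_CONFIG} present; kubelet can load its config")
    else:
        print(f"{KUBELET_CONFIG} missing; expect kubelet restart loops")
```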
Nov 6 05:53:37.019207 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:37.267626 sshd[1775]: Connection closed by 139.178.68.195 port 35032 Nov 6 05:53:37.267370 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Nov 6 05:53:37.272587 systemd[1]: sshd@1-10.230.27.98:22-139.178.68.195:35032.service: Deactivated successfully. Nov 6 05:53:37.275370 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 05:53:37.278481 systemd-logind[1611]: Session 2 logged out. Waiting for processes to exit. Nov 6 05:53:37.279864 systemd-logind[1611]: Removed session 2. Nov 6 05:53:37.429519 systemd[1]: Started sshd@2-10.230.27.98:22-139.178.68.195:35034.service - OpenSSH per-connection server daemon (139.178.68.195:35034). Nov 6 05:53:38.233449 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 35034 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:38.235440 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:38.242636 systemd-logind[1611]: New session 3 of user core. Nov 6 05:53:38.258811 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 05:53:38.688527 sshd[1785]: Connection closed by 139.178.68.195 port 35034 Nov 6 05:53:38.689456 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Nov 6 05:53:38.694849 systemd[1]: sshd@2-10.230.27.98:22-139.178.68.195:35034.service: Deactivated successfully. Nov 6 05:53:38.698315 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 05:53:38.700708 systemd-logind[1611]: Session 3 logged out. Waiting for processes to exit. Nov 6 05:53:38.703448 systemd-logind[1611]: Removed session 3. Nov 6 05:53:39.690386 login[1726]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 05:53:39.693614 login[1727]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 6 05:53:39.701334 systemd-logind[1611]: New session 4 of user core. Nov 6 05:53:39.718655 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 05:53:39.724430 systemd-logind[1611]: New session 5 of user core. Nov 6 05:53:39.733484 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 6 05:53:40.401190 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:40.411762 coreos-metadata[1598]: Nov 06 05:53:40.411 WARN failed to locate config-drive, using the metadata service API instead Nov 6 05:53:40.441601 coreos-metadata[1598]: Nov 06 05:53:40.441 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Nov 6 05:53:40.452002 coreos-metadata[1598]: Nov 06 05:53:40.451 INFO Fetch failed with 404: resource not found Nov 6 05:53:40.452233 coreos-metadata[1598]: Nov 06 05:53:40.452 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Nov 6 05:53:40.452583 coreos-metadata[1598]: Nov 06 05:53:40.452 INFO Fetch successful Nov 6 05:53:40.452730 coreos-metadata[1598]: Nov 06 05:53:40.452 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Nov 6 05:53:40.469241 coreos-metadata[1598]: Nov 06 05:53:40.468 INFO Fetch successful Nov 6 05:53:40.469407 coreos-metadata[1598]: Nov 06 05:53:40.469 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Nov 6 05:53:40.490850 coreos-metadata[1598]: Nov 06 05:53:40.490 INFO Fetch successful Nov 6 05:53:40.491272 coreos-metadata[1598]: Nov 06 05:53:40.491 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Nov 6 05:53:40.505703 coreos-metadata[1598]: Nov 06 05:53:40.505 INFO Fetch successful Nov 6 05:53:40.506014 coreos-metadata[1598]: Nov 06 05:53:40.505 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Nov 6 05:53:40.524770 coreos-metadata[1598]: Nov 06 05:53:40.524 INFO Fetch successful Nov 6 05:53:40.565738 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 6 05:53:40.566916 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 05:53:41.038704 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Nov 6 05:53:41.053100 coreos-metadata[1680]: Nov 06 05:53:41.052 WARN failed to locate config-drive, using the metadata service API instead Nov 6 05:53:41.077170 coreos-metadata[1680]: Nov 06 05:53:41.076 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Nov 6 05:53:41.104951 coreos-metadata[1680]: Nov 06 05:53:41.104 INFO Fetch successful Nov 6 05:53:41.105356 coreos-metadata[1680]: Nov 06 05:53:41.105 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 6 05:53:41.135525 coreos-metadata[1680]: Nov 06 05:53:41.135 INFO Fetch successful Nov 6 05:53:41.138246 unknown[1680]: wrote ssh authorized keys file for user: core Nov 6 05:53:41.164970 update-ssh-keys[1825]: Updated "/home/core/.ssh/authorized_keys" Nov 6 05:53:41.166072 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 6 05:53:41.170601 systemd[1]: Finished sshkeys.service. Nov 6 05:53:41.173631 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 05:53:41.173852 systemd[1]: Startup finished in 3.526s (kernel) + 14.572s (initrd) + 11.819s (userspace) = 29.918s. Nov 6 05:53:46.434667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 05:53:46.437515 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:53:46.663049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
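Editorial note: in the block above, coreos-metadata cannot find a config-drive (the repeated "config-2: Can't lookup blockdev" lines), falls back to the link-local metadata API, gets a 404 for the OpenStack JSON document, and then succeeds against the EC2-style paths. A hedged sketch of that fallback, using only the URLs visible in the log, follows; it is illustrative and is not the coreos-metadata agent itself.

```python
#!/usr/bin/env python3
"""Hedged sketch: fetch instance metadata the way the log above falls back.

URLs are copied from the log; the retry/fallback handling is illustrative
and is not the coreos-metadata implementation.
"""
from typing import Optional
import urllib.error
import urllib.request

BASE = "http://169.254.169.254"
PATHS = [
    "/latest/meta-data/hostname",
    "/latest/meta-data/instance-id",
    "/latest/meta-data/instance-type",
    "/latest/meta-data/local-ipv4",
    "/latest/meta-data/public-ipv4",
]


def fetch(path: str) -> Optional[str]:
    try:
        with urllib.request.urlopen(BASE + path, timeout=5) as resp:
            return resp.read().decode().strip()
    except urllib.error.HTTPError as err:
        # Same pattern as the log: a 404 on one document is not fatal,
        # the agent simply moves on to the next endpoint.
        print(f"{path}: HTTP {err.code}")
        return None


if __name__ == "__main__":
    for path in PATHS:
        value = fetch(path)
        if value is not None:
            print(f"{path}: {value}")
```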
Nov 6 05:53:46.675691 (kubelet)[1836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:53:46.736152 kubelet[1836]: E1106 05:53:46.735887 1836 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:53:46.741771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:53:46.742265 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:53:46.743347 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109.1M memory peak. Nov 6 05:53:48.981481 systemd[1]: Started sshd@3-10.230.27.98:22-139.178.68.195:46836.service - OpenSSH per-connection server daemon (139.178.68.195:46836). Nov 6 05:53:49.792084 sshd[1844]: Accepted publickey for core from 139.178.68.195 port 46836 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:49.794866 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:49.803851 systemd-logind[1611]: New session 6 of user core. Nov 6 05:53:49.814548 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 05:53:50.248824 sshd[1847]: Connection closed by 139.178.68.195 port 46836 Nov 6 05:53:50.247795 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Nov 6 05:53:50.253420 systemd[1]: sshd@3-10.230.27.98:22-139.178.68.195:46836.service: Deactivated successfully. Nov 6 05:53:50.255849 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 05:53:50.258803 systemd-logind[1611]: Session 6 logged out. Waiting for processes to exit. Nov 6 05:53:50.260433 systemd-logind[1611]: Removed session 6. Nov 6 05:53:50.413012 systemd[1]: Started sshd@4-10.230.27.98:22-139.178.68.195:46846.service - OpenSSH per-connection server daemon (139.178.68.195:46846). Nov 6 05:53:51.215258 sshd[1853]: Accepted publickey for core from 139.178.68.195 port 46846 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:51.217204 sshd-session[1853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:51.226081 systemd-logind[1611]: New session 7 of user core. Nov 6 05:53:51.235382 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 05:53:51.664273 sshd[1856]: Connection closed by 139.178.68.195 port 46846 Nov 6 05:53:51.665275 sshd-session[1853]: pam_unix(sshd:session): session closed for user core Nov 6 05:53:51.670262 systemd[1]: sshd@4-10.230.27.98:22-139.178.68.195:46846.service: Deactivated successfully. Nov 6 05:53:51.673024 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 05:53:51.675775 systemd-logind[1611]: Session 7 logged out. Waiting for processes to exit. Nov 6 05:53:51.677707 systemd-logind[1611]: Removed session 7. Nov 6 05:53:51.827602 systemd[1]: Started sshd@5-10.230.27.98:22-139.178.68.195:46860.service - OpenSSH per-connection server daemon (139.178.68.195:46860). 
Nov 6 05:53:52.630092 sshd[1862]: Accepted publickey for core from 139.178.68.195 port 46860 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:52.631877 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:52.639021 systemd-logind[1611]: New session 8 of user core. Nov 6 05:53:52.648375 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 05:53:53.084331 sshd[1865]: Connection closed by 139.178.68.195 port 46860 Nov 6 05:53:53.085434 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Nov 6 05:53:53.091061 systemd[1]: sshd@5-10.230.27.98:22-139.178.68.195:46860.service: Deactivated successfully. Nov 6 05:53:53.091938 systemd-logind[1611]: Session 8 logged out. Waiting for processes to exit. Nov 6 05:53:53.093648 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 05:53:53.096834 systemd-logind[1611]: Removed session 8. Nov 6 05:53:53.248132 systemd[1]: Started sshd@6-10.230.27.98:22-139.178.68.195:34470.service - OpenSSH per-connection server daemon (139.178.68.195:34470). Nov 6 05:53:54.040764 sshd[1871]: Accepted publickey for core from 139.178.68.195 port 34470 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:54.042512 sshd-session[1871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:54.049185 systemd-logind[1611]: New session 9 of user core. Nov 6 05:53:54.057362 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 05:53:54.366426 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 05:53:54.366955 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:53:54.386443 sudo[1875]: pam_unix(sudo:session): session closed for user root Nov 6 05:53:54.536326 sshd[1874]: Connection closed by 139.178.68.195 port 34470 Nov 6 05:53:54.535802 sshd-session[1871]: pam_unix(sshd:session): session closed for user core Nov 6 05:53:54.541918 systemd-logind[1611]: Session 9 logged out. Waiting for processes to exit. Nov 6 05:53:54.542816 systemd[1]: sshd@6-10.230.27.98:22-139.178.68.195:34470.service: Deactivated successfully. Nov 6 05:53:54.545670 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 05:53:54.549105 systemd-logind[1611]: Removed session 9. Nov 6 05:53:54.697173 systemd[1]: Started sshd@7-10.230.27.98:22-139.178.68.195:34484.service - OpenSSH per-connection server daemon (139.178.68.195:34484). Nov 6 05:53:55.499087 sshd[1881]: Accepted publickey for core from 139.178.68.195 port 34484 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:55.501093 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:55.509249 systemd-logind[1611]: New session 10 of user core. Nov 6 05:53:55.527692 systemd[1]: Started session-10.scope - Session 10 of User core. 
Nov 6 05:53:55.803886 sudo[1886]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 05:53:55.805453 sudo[1886]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:53:55.812505 sudo[1886]: pam_unix(sudo:session): session closed for user root Nov 6 05:53:55.821096 sudo[1885]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 05:53:55.822011 sudo[1885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:53:55.837821 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 05:53:55.892582 augenrules[1908]: No rules Nov 6 05:53:55.893903 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 05:53:55.894294 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 05:53:55.896417 sudo[1885]: pam_unix(sudo:session): session closed for user root Nov 6 05:53:56.044611 sshd[1884]: Connection closed by 139.178.68.195 port 34484 Nov 6 05:53:56.045507 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Nov 6 05:53:56.051025 systemd-logind[1611]: Session 10 logged out. Waiting for processes to exit. Nov 6 05:53:56.051495 systemd[1]: sshd@7-10.230.27.98:22-139.178.68.195:34484.service: Deactivated successfully. Nov 6 05:53:56.053939 systemd[1]: session-10.scope: Deactivated successfully. Nov 6 05:53:56.056435 systemd-logind[1611]: Removed session 10. Nov 6 05:53:56.208571 systemd[1]: Started sshd@8-10.230.27.98:22-139.178.68.195:34500.service - OpenSSH per-connection server daemon (139.178.68.195:34500). Nov 6 05:53:56.840257 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 05:53:56.843873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:53:56.997926 sshd[1917]: Accepted publickey for core from 139.178.68.195 port 34500 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:53:56.999613 sshd-session[1917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:53:57.012232 systemd-logind[1611]: New session 11 of user core. Nov 6 05:53:57.019874 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 05:53:57.023282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:53:57.034868 (kubelet)[1927]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:53:57.136823 kubelet[1927]: E1106 05:53:57.136757 1927 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:53:57.139449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:53:57.139707 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:53:57.140271 systemd[1]: kubelet.service: Consumed 220ms CPU time, 107.7M memory peak. Nov 6 05:53:57.301983 sudo[1936]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 05:53:57.303073 sudo[1936]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 05:53:57.844452 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 6 05:53:57.861408 (dockerd)[1953]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 05:53:58.241896 dockerd[1953]: time="2025-11-06T05:53:58.241725431Z" level=info msg="Starting up" Nov 6 05:53:58.243597 dockerd[1953]: time="2025-11-06T05:53:58.243558267Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 6 05:53:58.264772 dockerd[1953]: time="2025-11-06T05:53:58.264697976Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 6 05:53:58.287488 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2112367518-merged.mount: Deactivated successfully. Nov 6 05:53:58.299588 systemd[1]: var-lib-docker-metacopy\x2dcheck2548356241-merged.mount: Deactivated successfully. Nov 6 05:53:58.328235 dockerd[1953]: time="2025-11-06T05:53:58.328129453Z" level=info msg="Loading containers: start." Nov 6 05:53:58.346555 kernel: Initializing XFRM netlink socket Nov 6 05:53:58.693465 systemd-networkd[1532]: docker0: Link UP Nov 6 05:53:58.698008 dockerd[1953]: time="2025-11-06T05:53:58.697928975Z" level=info msg="Loading containers: done." Nov 6 05:53:58.732176 dockerd[1953]: time="2025-11-06T05:53:58.732078262Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 05:53:58.732462 dockerd[1953]: time="2025-11-06T05:53:58.732237884Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 6 05:53:58.732535 dockerd[1953]: time="2025-11-06T05:53:58.732502425Z" level=info msg="Initializing buildkit" Nov 6 05:53:58.760853 dockerd[1953]: time="2025-11-06T05:53:58.760763022Z" level=info msg="Completed buildkit initialization" Nov 6 05:53:58.770482 dockerd[1953]: time="2025-11-06T05:53:58.770398060Z" level=info msg="Daemon has completed initialization" Nov 6 05:53:58.770702 dockerd[1953]: time="2025-11-06T05:53:58.770493825Z" level=info msg="API listen on /run/docker.sock" Nov 6 05:53:58.771361 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 05:53:59.285967 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3914989845-merged.mount: Deactivated successfully. Nov 6 05:54:00.051070 containerd[1638]: time="2025-11-06T05:54:00.050951739Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 6 05:54:01.198234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162969227.mount: Deactivated successfully. 
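dockerd finishes initialization and reports "API listen on /run/docker.sock". A hedged sketch that pings that socket with the Docker Go SDK; the SDK import path and calls are assumptions, only the socket path comes from the log:

```go
// Sketch: ping the daemon whose startup is logged above, over the socket it
// says it listens on (/run/docker.sock). SDK import path and calls are assumptions.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.WithHost("unix:///run/docker.sock"),
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}
```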
Nov 6 05:54:03.266230 containerd[1638]: time="2025-11-06T05:54:03.266151897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:03.267821 containerd[1638]: time="2025-11-06T05:54:03.267790571Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=28442726" Nov 6 05:54:03.269595 containerd[1638]: time="2025-11-06T05:54:03.268288322Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:03.273470 containerd[1638]: time="2025-11-06T05:54:03.273428568Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:03.274952 containerd[1638]: time="2025-11-06T05:54:03.274915866Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 3.223823553s" Nov 6 05:54:03.275162 containerd[1638]: time="2025-11-06T05:54:03.275120215Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 6 05:54:03.277258 containerd[1638]: time="2025-11-06T05:54:03.277221960Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 6 05:54:05.925107 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
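The PullImage lines above are served by containerd in its k8s.io namespace (hence the io.cri-containerd.image labels on the ImageCreate events). A sketch of the same pull through a containerd Go client; the socket path and the classic client import path are assumptions, and containerd 2.x (which this host runs) ships the client under github.com/containerd/containerd/v2/client instead:

```go
// Sketch: pull the same image the log records, in the same "k8s.io" namespace.
// Socket path and the classic client import path are assumptions.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.33.5", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}
```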
Nov 6 05:54:06.596900 containerd[1638]: time="2025-11-06T05:54:06.596785949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:06.598534 containerd[1638]: time="2025-11-06T05:54:06.598256245Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26012689" Nov 6 05:54:06.599355 containerd[1638]: time="2025-11-06T05:54:06.599318225Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:06.602684 containerd[1638]: time="2025-11-06T05:54:06.602649240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:06.604221 containerd[1638]: time="2025-11-06T05:54:06.604182304Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 3.326917768s" Nov 6 05:54:06.604313 containerd[1638]: time="2025-11-06T05:54:06.604227469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 6 05:54:06.605736 containerd[1638]: time="2025-11-06T05:54:06.605534382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 6 05:54:07.350551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 6 05:54:07.353405 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:54:07.562458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:54:07.581347 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:54:07.690595 kubelet[2237]: E1106 05:54:07.690421 2237 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:54:07.697736 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:54:07.698009 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:54:07.698810 systemd[1]: kubelet.service: Consumed 252ms CPU time, 108M memory peak. 
Nov 6 05:54:09.043453 containerd[1638]: time="2025-11-06T05:54:09.043322510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:09.048888 containerd[1638]: time="2025-11-06T05:54:09.048835186Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20147431" Nov 6 05:54:09.063395 containerd[1638]: time="2025-11-06T05:54:09.063289167Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:09.068106 containerd[1638]: time="2025-11-06T05:54:09.068067087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:09.070740 containerd[1638]: time="2025-11-06T05:54:09.070654459Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 2.465043732s" Nov 6 05:54:09.070912 containerd[1638]: time="2025-11-06T05:54:09.070864659Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 6 05:54:09.071824 containerd[1638]: time="2025-11-06T05:54:09.071656314Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 6 05:54:10.946708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475679000.mount: Deactivated successfully. 
Nov 6 05:54:13.409474 containerd[1638]: time="2025-11-06T05:54:13.409402088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:13.410738 containerd[1638]: time="2025-11-06T05:54:13.410489239Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31925747" Nov 6 05:54:13.411455 containerd[1638]: time="2025-11-06T05:54:13.411417510Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:13.413867 containerd[1638]: time="2025-11-06T05:54:13.413829034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:13.414856 containerd[1638]: time="2025-11-06T05:54:13.414815084Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 4.343071287s" Nov 6 05:54:13.414991 containerd[1638]: time="2025-11-06T05:54:13.414860507Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 6 05:54:13.416027 containerd[1638]: time="2025-11-06T05:54:13.415580167Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 6 05:54:14.247048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504720666.mount: Deactivated successfully. 
Nov 6 05:54:16.163322 containerd[1638]: time="2025-11-06T05:54:16.163236209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:16.165679 containerd[1638]: time="2025-11-06T05:54:16.165645811Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Nov 6 05:54:16.166781 containerd[1638]: time="2025-11-06T05:54:16.166455720Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:16.173366 containerd[1638]: time="2025-11-06T05:54:16.173316860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:16.174767 containerd[1638]: time="2025-11-06T05:54:16.174725581Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.759102695s" Nov 6 05:54:16.174845 containerd[1638]: time="2025-11-06T05:54:16.174775622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 6 05:54:16.176100 containerd[1638]: time="2025-11-06T05:54:16.175805750Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 05:54:17.256863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2025599524.mount: Deactivated successfully. 
Nov 6 05:54:17.265588 containerd[1638]: time="2025-11-06T05:54:17.265522138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:54:17.267116 containerd[1638]: time="2025-11-06T05:54:17.267082940Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 6 05:54:17.268323 containerd[1638]: time="2025-11-06T05:54:17.268286619Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:54:17.272554 containerd[1638]: time="2025-11-06T05:54:17.271433015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 05:54:17.273096 containerd[1638]: time="2025-11-06T05:54:17.272500672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.096656202s" Nov 6 05:54:17.273196 containerd[1638]: time="2025-11-06T05:54:17.273104567Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 6 05:54:17.273955 containerd[1638]: time="2025-11-06T05:54:17.273904842Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 6 05:54:17.850073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 6 05:54:17.852977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:54:18.205356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:54:18.228096 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 05:54:18.233856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339835460.mount: Deactivated successfully. Nov 6 05:54:18.267685 update_engine[1617]: I20251106 05:54:18.267325 1617 update_attempter.cc:509] Updating boot flags... Nov 6 05:54:18.535967 kubelet[2326]: E1106 05:54:18.532760 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 05:54:18.537039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 05:54:18.537325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 05:54:18.539328 systemd[1]: kubelet.service: Consumed 260ms CPU time, 106M memory peak. 
Nov 6 05:54:22.128549 containerd[1638]: time="2025-11-06T05:54:22.128422747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:22.135432 containerd[1638]: time="2025-11-06T05:54:22.135383354Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57211813" Nov 6 05:54:22.137338 containerd[1638]: time="2025-11-06T05:54:22.136631988Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:22.151287 containerd[1638]: time="2025-11-06T05:54:22.151243886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:22.152943 containerd[1638]: time="2025-11-06T05:54:22.152908650Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.87894581s" Nov 6 05:54:22.153089 containerd[1638]: time="2025-11-06T05:54:22.153059643Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 6 05:54:27.444615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:54:27.445580 systemd[1]: kubelet.service: Consumed 260ms CPU time, 106M memory peak. Nov 6 05:54:27.450279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:54:27.490427 systemd[1]: Reload requested from client PID 2431 ('systemctl') (unit session-11.scope)... Nov 6 05:54:27.490491 systemd[1]: Reloading... Nov 6 05:54:27.661313 zram_generator::config[2482]: No configuration found. Nov 6 05:54:27.989602 systemd[1]: Reloading finished in 498 ms. Nov 6 05:54:28.062901 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:54:28.069247 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 05:54:28.069693 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:54:28.069751 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.1M memory peak. Nov 6 05:54:28.072234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:54:28.260275 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:54:28.274015 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 05:54:28.384735 kubelet[2545]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:54:28.384735 kubelet[2545]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 05:54:28.384735 kubelet[2545]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:54:28.387611 kubelet[2545]: I1106 05:54:28.387534 2545 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 05:54:28.752078 kubelet[2545]: I1106 05:54:28.751991 2545 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 05:54:28.752078 kubelet[2545]: I1106 05:54:28.752044 2545 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 05:54:28.752674 kubelet[2545]: I1106 05:54:28.752637 2545 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 05:54:28.798694 kubelet[2545]: I1106 05:54:28.798546 2545 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 05:54:28.804217 kubelet[2545]: E1106 05:54:28.804124 2545 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.27.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 6 05:54:28.830200 kubelet[2545]: I1106 05:54:28.829230 2545 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 05:54:28.842869 kubelet[2545]: I1106 05:54:28.842670 2545 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 05:54:28.847611 kubelet[2545]: I1106 05:54:28.847295 2545 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 05:54:28.850734 kubelet[2545]: I1106 05:54:28.847356 2545 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-dhf6q.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 05:54:28.851224 kubelet[2545]: I1106 05:54:28.851202 2545 topology_manager.go:138] 
"Creating topology manager with none policy" Nov 6 05:54:28.851831 kubelet[2545]: I1106 05:54:28.851470 2545 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 05:54:28.851831 kubelet[2545]: I1106 05:54:28.851722 2545 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:54:28.857120 kubelet[2545]: I1106 05:54:28.857083 2545 kubelet.go:480] "Attempting to sync node with API server" Nov 6 05:54:28.857245 kubelet[2545]: I1106 05:54:28.857128 2545 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 05:54:28.857245 kubelet[2545]: I1106 05:54:28.857221 2545 kubelet.go:386] "Adding apiserver pod source" Nov 6 05:54:28.857338 kubelet[2545]: I1106 05:54:28.857258 2545 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 05:54:28.864087 kubelet[2545]: E1106 05:54:28.863488 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.27.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-dhf6q.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 05:54:28.869864 kubelet[2545]: E1106 05:54:28.869829 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.27.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 05:54:28.871183 kubelet[2545]: I1106 05:54:28.871131 2545 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 6 05:54:28.872199 kubelet[2545]: I1106 05:54:28.872177 2545 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 05:54:28.874839 kubelet[2545]: W1106 05:54:28.874813 2545 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 6 05:54:28.884139 kubelet[2545]: I1106 05:54:28.884102 2545 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 05:54:28.884393 kubelet[2545]: I1106 05:54:28.884373 2545 server.go:1289] "Started kubelet" Nov 6 05:54:28.890276 kubelet[2545]: I1106 05:54:28.889982 2545 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 05:54:28.898907 kubelet[2545]: I1106 05:54:28.898860 2545 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 05:54:28.902703 kubelet[2545]: I1106 05:54:28.902681 2545 server.go:317] "Adding debug handlers to kubelet server" Nov 6 05:54:28.920341 kubelet[2545]: I1106 05:54:28.920232 2545 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 05:54:28.920844 kubelet[2545]: I1106 05:54:28.920822 2545 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 05:54:28.922372 kubelet[2545]: I1106 05:54:28.922347 2545 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 05:54:28.925507 kubelet[2545]: I1106 05:54:28.925483 2545 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 05:54:28.926020 kubelet[2545]: E1106 05:54:28.925990 2545 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" Nov 6 05:54:28.930023 kubelet[2545]: E1106 05:54:28.891642 2545 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.27.98:6443/api/v1/namespaces/default/events\": dial tcp 10.230.27.98:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-dhf6q.gb1.brightbox.com.1875552a30fce8b8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-dhf6q.gb1.brightbox.com,UID:srv-dhf6q.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-dhf6q.gb1.brightbox.com,},FirstTimestamp:2025-11-06 05:54:28.884285624 +0000 UTC m=+0.603926524,LastTimestamp:2025-11-06 05:54:28.884285624 +0000 UTC m=+0.603926524,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-dhf6q.gb1.brightbox.com,}" Nov 6 05:54:28.933112 kubelet[2545]: I1106 05:54:28.933037 2545 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 6 05:54:28.935788 kubelet[2545]: I1106 05:54:28.934783 2545 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 05:54:28.935788 kubelet[2545]: I1106 05:54:28.934837 2545 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 05:54:28.935788 kubelet[2545]: I1106 05:54:28.934873 2545 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
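The kubelet also announces its pod-resources gRPC endpoint at unix:/var/lib/kubelet/pod-resources/kubelet.sock. A hedged sketch that lists pod resources over that socket; the gRPC target syntax and the k8s.io/kubelet client stubs are assumptions, only the socket path comes from the log:

```go
// Sketch: list pod resources over the socket the kubelet log above announces.
// gRPC target syntax and the k8s.io/kubelet client package are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	conn, err := grpc.Dial(
		"unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := podresourcesv1.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pod := range resp.PodResources {
		fmt.Println(pod.Namespace + "/" + pod.Name)
	}
}
```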
Nov 6 05:54:28.935788 kubelet[2545]: I1106 05:54:28.934893 2545 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 05:54:28.935788 kubelet[2545]: E1106 05:54:28.934977 2545 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 05:54:28.935788 kubelet[2545]: I1106 05:54:28.935301 2545 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 05:54:28.935788 kubelet[2545]: I1106 05:54:28.935408 2545 reconciler.go:26] "Reconciler: start to sync state" Nov 6 05:54:28.937705 kubelet[2545]: E1106 05:54:28.937668 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.27.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dhf6q.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.27.98:6443: connect: connection refused" interval="200ms" Nov 6 05:54:28.938494 kubelet[2545]: E1106 05:54:28.938463 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.27.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 05:54:28.938926 kubelet[2545]: I1106 05:54:28.938896 2545 factory.go:223] Registration of the systemd container factory successfully Nov 6 05:54:28.939154 kubelet[2545]: I1106 05:54:28.939126 2545 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 05:54:28.942536 kubelet[2545]: E1106 05:54:28.941134 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.27.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 05:54:28.942536 kubelet[2545]: I1106 05:54:28.942320 2545 factory.go:223] Registration of the containerd container factory successfully Nov 6 05:54:28.947166 kubelet[2545]: E1106 05:54:28.946579 2545 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 05:54:28.975341 kubelet[2545]: I1106 05:54:28.975269 2545 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 05:54:28.975341 kubelet[2545]: I1106 05:54:28.975310 2545 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 05:54:28.975341 kubelet[2545]: I1106 05:54:28.975351 2545 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:54:28.977834 kubelet[2545]: I1106 05:54:28.977790 2545 policy_none.go:49] "None policy: Start" Nov 6 05:54:28.977932 kubelet[2545]: I1106 05:54:28.977840 2545 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 05:54:28.977932 kubelet[2545]: I1106 05:54:28.977868 2545 state_mem.go:35] "Initializing new in-memory state store" Nov 6 05:54:28.989213 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 05:54:29.004602 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 05:54:29.011267 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
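The "Failed to ensure lease exists" error names the exact object involved: a Lease called srv-dhf6q.gb1.brightbox.com in the kube-node-lease namespace. Once the API server answers, it can be read with client-go; the lease name and namespace come from the log, while the kubeconfig path below is an assumption:

```go
// Sketch: read the node lease the kubelet error above fails to ensure.
// Lease name and namespace come from the log; the kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.Background(), "srv-dhf6q.gb1.brightbox.com", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease last renewed at:", lease.Spec.RenewTime)
}
```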
Nov 6 05:54:29.020467 kubelet[2545]: E1106 05:54:29.020411 2545 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 05:54:29.021332 kubelet[2545]: I1106 05:54:29.021306 2545 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 05:54:29.021406 kubelet[2545]: I1106 05:54:29.021352 2545 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 05:54:29.021878 kubelet[2545]: I1106 05:54:29.021853 2545 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 05:54:29.025805 kubelet[2545]: E1106 05:54:29.025545 2545 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 05:54:29.025805 kubelet[2545]: E1106 05:54:29.025619 2545 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-dhf6q.gb1.brightbox.com\" not found" Nov 6 05:54:29.052024 systemd[1]: Created slice kubepods-burstable-pod39a70a20eb6d11b11e10727bf302fed4.slice - libcontainer container kubepods-burstable-pod39a70a20eb6d11b11e10727bf302fed4.slice. Nov 6 05:54:29.068816 kubelet[2545]: E1106 05:54:29.068463 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.071666 systemd[1]: Created slice kubepods-burstable-pod83ec7c6d3556c675106b1ca43339f70d.slice - libcontainer container kubepods-burstable-pod83ec7c6d3556c675106b1ca43339f70d.slice. Nov 6 05:54:29.085010 kubelet[2545]: E1106 05:54:29.084983 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.089277 systemd[1]: Created slice kubepods-burstable-podabcdc5f7f270bd6b98e47f616d415913.slice - libcontainer container kubepods-burstable-podabcdc5f7f270bd6b98e47f616d415913.slice. 
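The kubepods*.slice units created here form the cgroup hierarchy requested by the NodeConfig logged earlier (CgroupDriver "systemd", CgroupRoot "/", cgroup v2), with per-pod slices nested under the QoS slices. A small sketch that walks that hierarchy; the /sys/fs/cgroup mount point for cgroup v2 is an assumption, the slice names are from the log:

```go
// Sketch: walk the kubepods cgroup slices whose creation systemd logs above.
// The /sys/fs/cgroup mount point for cgroup v2 is an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	root := "/sys/fs/cgroup/kubepods.slice"
	filepath.WalkDir(root, func(path string, d os.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() && strings.HasSuffix(d.Name(), ".slice") {
			fmt.Println(path)
		}
		return nil
	})
}
```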
Nov 6 05:54:29.092633 kubelet[2545]: E1106 05:54:29.092577 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.124308 kubelet[2545]: I1106 05:54:29.124265 2545 kubelet_node_status.go:75] "Attempting to register node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.124863 kubelet[2545]: E1106 05:54:29.124820 2545 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.27.98:6443/api/v1/nodes\": dial tcp 10.230.27.98:6443: connect: connection refused" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.136508 kubelet[2545]: I1106 05:54:29.136403 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83ec7c6d3556c675106b1ca43339f70d-ca-certs\") pod \"kube-apiserver-srv-dhf6q.gb1.brightbox.com\" (UID: \"83ec7c6d3556c675106b1ca43339f70d\") " pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.136508 kubelet[2545]: I1106 05:54:29.136497 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83ec7c6d3556c675106b1ca43339f70d-k8s-certs\") pod \"kube-apiserver-srv-dhf6q.gb1.brightbox.com\" (UID: \"83ec7c6d3556c675106b1ca43339f70d\") " pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.136866 kubelet[2545]: I1106 05:54:29.136827 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39a70a20eb6d11b11e10727bf302fed4-kubeconfig\") pod \"kube-scheduler-srv-dhf6q.gb1.brightbox.com\" (UID: \"39a70a20eb6d11b11e10727bf302fed4\") " pod="kube-system/kube-scheduler-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.139149 kubelet[2545]: E1106 05:54:29.139093 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.27.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dhf6q.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.27.98:6443: connect: connection refused" interval="400ms" Nov 6 05:54:29.238707 kubelet[2545]: I1106 05:54:29.238273 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-flexvolume-dir\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.238707 kubelet[2545]: I1106 05:54:29.238366 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83ec7c6d3556c675106b1ca43339f70d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-dhf6q.gb1.brightbox.com\" (UID: \"83ec7c6d3556c675106b1ca43339f70d\") " pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.238707 kubelet[2545]: I1106 05:54:29.238482 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-ca-certs\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " 
pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.238707 kubelet[2545]: I1106 05:54:29.238521 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-k8s-certs\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.238707 kubelet[2545]: I1106 05:54:29.238579 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-kubeconfig\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.239090 kubelet[2545]: I1106 05:54:29.238657 2545 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.328698 kubelet[2545]: I1106 05:54:29.328519 2545 kubelet_node_status.go:75] "Attempting to register node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.329845 kubelet[2545]: E1106 05:54:29.329780 2545 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.27.98:6443/api/v1/nodes\": dial tcp 10.230.27.98:6443: connect: connection refused" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.373164 containerd[1638]: time="2025-11-06T05:54:29.372999589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-dhf6q.gb1.brightbox.com,Uid:39a70a20eb6d11b11e10727bf302fed4,Namespace:kube-system,Attempt:0,}" Nov 6 05:54:29.387668 containerd[1638]: time="2025-11-06T05:54:29.387610684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-dhf6q.gb1.brightbox.com,Uid:83ec7c6d3556c675106b1ca43339f70d,Namespace:kube-system,Attempt:0,}" Nov 6 05:54:29.397534 containerd[1638]: time="2025-11-06T05:54:29.395769311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-dhf6q.gb1.brightbox.com,Uid:abcdc5f7f270bd6b98e47f616d415913,Namespace:kube-system,Attempt:0,}" Nov 6 05:54:29.514976 containerd[1638]: time="2025-11-06T05:54:29.514896698Z" level=info msg="connecting to shim f07ecec07d1f05964afb9685db23f2fab403ac16a16bb0144207f5abc72474cf" address="unix:///run/containerd/s/c8e4befafc85552cbe7abc78eff70071c8847b9dcbff912d36e7242181cd3551" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:54:29.515864 containerd[1638]: time="2025-11-06T05:54:29.515828193Z" level=info msg="connecting to shim 4feb82c7629fd1970713cc65d23d84e16608995c075696cd7a5c30f048dbced7" address="unix:///run/containerd/s/962809e18da944ec50462fdeb58081d2f545ec5bebc7224f1ea8b4e63042f19f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:54:29.518868 containerd[1638]: time="2025-11-06T05:54:29.518828498Z" level=info msg="connecting to shim 1b36fd083f38182f50b85cf108155c572b9ef9d835986ce8dbccc26e02942f49" address="unix:///run/containerd/s/7a63796d098fabf4d027bf6f2ac0597f58f3e25457c434df16420be7f3b19e4d" namespace=k8s.io 
protocol=ttrpc version=3 Nov 6 05:54:29.541663 kubelet[2545]: E1106 05:54:29.540195 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.27.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dhf6q.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.27.98:6443: connect: connection refused" interval="800ms" Nov 6 05:54:29.660561 systemd[1]: Started cri-containerd-1b36fd083f38182f50b85cf108155c572b9ef9d835986ce8dbccc26e02942f49.scope - libcontainer container 1b36fd083f38182f50b85cf108155c572b9ef9d835986ce8dbccc26e02942f49. Nov 6 05:54:29.664784 systemd[1]: Started cri-containerd-4feb82c7629fd1970713cc65d23d84e16608995c075696cd7a5c30f048dbced7.scope - libcontainer container 4feb82c7629fd1970713cc65d23d84e16608995c075696cd7a5c30f048dbced7. Nov 6 05:54:29.668079 systemd[1]: Started cri-containerd-f07ecec07d1f05964afb9685db23f2fab403ac16a16bb0144207f5abc72474cf.scope - libcontainer container f07ecec07d1f05964afb9685db23f2fab403ac16a16bb0144207f5abc72474cf. Nov 6 05:54:29.734425 kubelet[2545]: I1106 05:54:29.734363 2545 kubelet_node_status.go:75] "Attempting to register node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.734916 kubelet[2545]: E1106 05:54:29.734824 2545 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.27.98:6443/api/v1/nodes\": dial tcp 10.230.27.98:6443: connect: connection refused" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:29.801597 containerd[1638]: time="2025-11-06T05:54:29.801318958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-dhf6q.gb1.brightbox.com,Uid:83ec7c6d3556c675106b1ca43339f70d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b36fd083f38182f50b85cf108155c572b9ef9d835986ce8dbccc26e02942f49\"" Nov 6 05:54:29.812164 containerd[1638]: time="2025-11-06T05:54:29.811485016Z" level=info msg="CreateContainer within sandbox \"1b36fd083f38182f50b85cf108155c572b9ef9d835986ce8dbccc26e02942f49\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 05:54:29.813298 kubelet[2545]: E1106 05:54:29.813259 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.27.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 6 05:54:29.820034 containerd[1638]: time="2025-11-06T05:54:29.819953336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-dhf6q.gb1.brightbox.com,Uid:39a70a20eb6d11b11e10727bf302fed4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4feb82c7629fd1970713cc65d23d84e16608995c075696cd7a5c30f048dbced7\"" Nov 6 05:54:29.827663 containerd[1638]: time="2025-11-06T05:54:29.827403857Z" level=info msg="CreateContainer within sandbox \"4feb82c7629fd1970713cc65d23d84e16608995c075696cd7a5c30f048dbced7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 05:54:29.831008 containerd[1638]: time="2025-11-06T05:54:29.830976010Z" level=info msg="Container 6d903444742f267fa227419614cc968a44039fcc3698279fe7bd54ae305bf54e: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:54:29.840763 containerd[1638]: time="2025-11-06T05:54:29.840724112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-dhf6q.gb1.brightbox.com,Uid:abcdc5f7f270bd6b98e47f616d415913,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"f07ecec07d1f05964afb9685db23f2fab403ac16a16bb0144207f5abc72474cf\"" Nov 6 05:54:29.845720 containerd[1638]: time="2025-11-06T05:54:29.845358437Z" level=info msg="Container 26a91bdd3b17507594605828a08dee7178b1d157248c27f411d375a31778e8ca: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:54:29.847787 containerd[1638]: time="2025-11-06T05:54:29.847752163Z" level=info msg="CreateContainer within sandbox \"f07ecec07d1f05964afb9685db23f2fab403ac16a16bb0144207f5abc72474cf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 05:54:29.848966 containerd[1638]: time="2025-11-06T05:54:29.848927330Z" level=info msg="CreateContainer within sandbox \"1b36fd083f38182f50b85cf108155c572b9ef9d835986ce8dbccc26e02942f49\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6d903444742f267fa227419614cc968a44039fcc3698279fe7bd54ae305bf54e\"" Nov 6 05:54:29.850260 containerd[1638]: time="2025-11-06T05:54:29.850051090Z" level=info msg="StartContainer for \"6d903444742f267fa227419614cc968a44039fcc3698279fe7bd54ae305bf54e\"" Nov 6 05:54:29.853790 containerd[1638]: time="2025-11-06T05:54:29.853114739Z" level=info msg="connecting to shim 6d903444742f267fa227419614cc968a44039fcc3698279fe7bd54ae305bf54e" address="unix:///run/containerd/s/7a63796d098fabf4d027bf6f2ac0597f58f3e25457c434df16420be7f3b19e4d" protocol=ttrpc version=3 Nov 6 05:54:29.856286 containerd[1638]: time="2025-11-06T05:54:29.856251627Z" level=info msg="CreateContainer within sandbox \"4feb82c7629fd1970713cc65d23d84e16608995c075696cd7a5c30f048dbced7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"26a91bdd3b17507594605828a08dee7178b1d157248c27f411d375a31778e8ca\"" Nov 6 05:54:29.857829 containerd[1638]: time="2025-11-06T05:54:29.857793100Z" level=info msg="StartContainer for \"26a91bdd3b17507594605828a08dee7178b1d157248c27f411d375a31778e8ca\"" Nov 6 05:54:29.861797 containerd[1638]: time="2025-11-06T05:54:29.861037953Z" level=info msg="Container ebe46a7e8b237d461a46b60c0157ca3574c8e18dfd1df6035929c54afde192f8: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:54:29.861797 containerd[1638]: time="2025-11-06T05:54:29.861725216Z" level=info msg="connecting to shim 26a91bdd3b17507594605828a08dee7178b1d157248c27f411d375a31778e8ca" address="unix:///run/containerd/s/962809e18da944ec50462fdeb58081d2f545ec5bebc7224f1ea8b4e63042f19f" protocol=ttrpc version=3 Nov 6 05:54:29.872210 containerd[1638]: time="2025-11-06T05:54:29.872159679Z" level=info msg="CreateContainer within sandbox \"f07ecec07d1f05964afb9685db23f2fab403ac16a16bb0144207f5abc72474cf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ebe46a7e8b237d461a46b60c0157ca3574c8e18dfd1df6035929c54afde192f8\"" Nov 6 05:54:29.874308 containerd[1638]: time="2025-11-06T05:54:29.874272617Z" level=info msg="StartContainer for \"ebe46a7e8b237d461a46b60c0157ca3574c8e18dfd1df6035929c54afde192f8\"" Nov 6 05:54:29.876992 containerd[1638]: time="2025-11-06T05:54:29.876958216Z" level=info msg="connecting to shim ebe46a7e8b237d461a46b60c0157ca3574c8e18dfd1df6035929c54afde192f8" address="unix:///run/containerd/s/c8e4befafc85552cbe7abc78eff70071c8847b9dcbff912d36e7242181cd3551" protocol=ttrpc version=3 Nov 6 05:54:29.897347 systemd[1]: Started cri-containerd-6d903444742f267fa227419614cc968a44039fcc3698279fe7bd54ae305bf54e.scope - libcontainer container 6d903444742f267fa227419614cc968a44039fcc3698279fe7bd54ae305bf54e. 
Nov 6 05:54:29.913363 systemd[1]: Started cri-containerd-26a91bdd3b17507594605828a08dee7178b1d157248c27f411d375a31778e8ca.scope - libcontainer container 26a91bdd3b17507594605828a08dee7178b1d157248c27f411d375a31778e8ca. Nov 6 05:54:29.930340 systemd[1]: Started cri-containerd-ebe46a7e8b237d461a46b60c0157ca3574c8e18dfd1df6035929c54afde192f8.scope - libcontainer container ebe46a7e8b237d461a46b60c0157ca3574c8e18dfd1df6035929c54afde192f8. Nov 6 05:54:30.073001 containerd[1638]: time="2025-11-06T05:54:30.072855536Z" level=info msg="StartContainer for \"6d903444742f267fa227419614cc968a44039fcc3698279fe7bd54ae305bf54e\" returns successfully" Nov 6 05:54:30.096625 containerd[1638]: time="2025-11-06T05:54:30.096472397Z" level=info msg="StartContainer for \"ebe46a7e8b237d461a46b60c0157ca3574c8e18dfd1df6035929c54afde192f8\" returns successfully" Nov 6 05:54:30.097873 containerd[1638]: time="2025-11-06T05:54:30.097821367Z" level=info msg="StartContainer for \"26a91bdd3b17507594605828a08dee7178b1d157248c27f411d375a31778e8ca\" returns successfully" Nov 6 05:54:30.196106 kubelet[2545]: E1106 05:54:30.195911 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.27.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 6 05:54:30.204847 kubelet[2545]: E1106 05:54:30.204794 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.27.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 6 05:54:30.342028 kubelet[2545]: E1106 05:54:30.341953 2545 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.27.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-dhf6q.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.27.98:6443: connect: connection refused" interval="1.6s" Nov 6 05:54:30.445272 kubelet[2545]: E1106 05:54:30.445208 2545 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.27.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-dhf6q.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.27.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 6 05:54:30.539582 kubelet[2545]: I1106 05:54:30.539460 2545 kubelet_node_status.go:75] "Attempting to register node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:30.540586 kubelet[2545]: E1106 05:54:30.540550 2545 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.27.98:6443/api/v1/nodes\": dial tcp 10.230.27.98:6443: connect: connection refused" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:30.985307 kubelet[2545]: E1106 05:54:30.985218 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:30.991656 kubelet[2545]: E1106 05:54:30.991631 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:30.995682 
kubelet[2545]: E1106 05:54:30.995652 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:32.001045 kubelet[2545]: E1106 05:54:32.000993 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:32.001620 kubelet[2545]: E1106 05:54:32.001473 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:32.002668 kubelet[2545]: E1106 05:54:32.002641 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:32.145508 kubelet[2545]: I1106 05:54:32.145461 2545 kubelet_node_status.go:75] "Attempting to register node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.004572 kubelet[2545]: E1106 05:54:33.004524 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.005543 kubelet[2545]: E1106 05:54:33.004889 2545 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.228502 kubelet[2545]: E1106 05:54:33.228444 2545 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-dhf6q.gb1.brightbox.com\" not found" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.363372 kubelet[2545]: I1106 05:54:33.362578 2545 kubelet_node_status.go:78] "Successfully registered node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.431225 kubelet[2545]: I1106 05:54:33.431119 2545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.443254 kubelet[2545]: E1106 05:54:33.443218 2545 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-dhf6q.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.444100 kubelet[2545]: I1106 05:54:33.443429 2545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.445967 kubelet[2545]: E1106 05:54:33.445735 2545 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-dhf6q.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.445967 kubelet[2545]: I1106 05:54:33.445776 2545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.447987 kubelet[2545]: E1106 05:54:33.447959 2545 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:33.866926 kubelet[2545]: I1106 
05:54:33.866856 2545 apiserver.go:52] "Watching apiserver" Nov 6 05:54:33.935877 kubelet[2545]: I1106 05:54:33.935816 2545 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 05:54:33.991218 kubelet[2545]: I1106 05:54:33.991037 2545 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:34.000582 kubelet[2545]: I1106 05:54:34.000528 2545 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 05:54:35.329312 systemd[1]: Reload requested from client PID 2823 ('systemctl') (unit session-11.scope)... Nov 6 05:54:35.329350 systemd[1]: Reloading... Nov 6 05:54:35.443259 zram_generator::config[2864]: No configuration found. Nov 6 05:54:35.856091 systemd[1]: Reloading finished in 526 ms. Nov 6 05:54:35.903267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:54:35.920113 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 05:54:35.920743 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:54:35.921122 systemd[1]: kubelet.service: Consumed 1.171s CPU time, 129.7M memory peak. Nov 6 05:54:35.925815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 05:54:36.222113 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 05:54:36.235678 (kubelet)[2932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 05:54:36.338772 kubelet[2932]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 05:54:36.338772 kubelet[2932]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 05:54:36.340657 kubelet[2932]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
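The deprecation warnings here (and at the first start of this kubelet) say that --container-runtime-endpoint and --volume-plugin-dir belong in the file passed via --config. A hedged sketch that renders such a KubeletConfiguration from the published Go types; the static-pod path and volume-plugin directory values are taken from earlier log lines, while the runtime endpoint value and the exact field availability are assumptions:

```go
// Sketch: express the flags the deprecation warnings above mention as fields of
// a KubeletConfiguration and render it as YAML. Values are illustrative.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Paths below appear in the kubelet log; the endpoint value is an assumption.
		StaticPodPath:            "/etc/kubernetes/manifests",
		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
		VolumePluginDir:          "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := yaml.Marshal(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```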
Nov 6 05:54:36.340657 kubelet[2932]: I1106 05:54:36.339216 2932 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 05:54:36.350041 kubelet[2932]: I1106 05:54:36.350011 2932 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 6 05:54:36.350250 kubelet[2932]: I1106 05:54:36.350228 2932 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 05:54:36.350782 kubelet[2932]: I1106 05:54:36.350758 2932 server.go:956] "Client rotation is on, will bootstrap in background" Nov 6 05:54:36.353181 kubelet[2932]: I1106 05:54:36.353154 2932 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 6 05:54:36.357235 kubelet[2932]: I1106 05:54:36.357200 2932 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 05:54:36.392127 kubelet[2932]: I1106 05:54:36.392065 2932 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 6 05:54:36.401598 kubelet[2932]: I1106 05:54:36.401569 2932 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 05:54:36.402571 kubelet[2932]: I1106 05:54:36.402119 2932 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 05:54:36.402571 kubelet[2932]: I1106 05:54:36.402197 2932 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-dhf6q.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 05:54:36.402571 kubelet[2932]: I1106 05:54:36.402432 2932 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 05:54:36.402571 kubelet[2932]: I1106 05:54:36.402449 2932 container_manager_linux.go:303] "Creating device plugin manager" Nov 6 05:54:36.402571 kubelet[2932]: I1106 05:54:36.402519 2932 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:54:36.403315 kubelet[2932]: I1106 
05:54:36.403271 2932 kubelet.go:480] "Attempting to sync node with API server" Nov 6 05:54:36.404017 kubelet[2932]: I1106 05:54:36.403295 2932 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 05:54:36.404017 kubelet[2932]: I1106 05:54:36.403896 2932 kubelet.go:386] "Adding apiserver pod source" Nov 6 05:54:36.404017 kubelet[2932]: I1106 05:54:36.403922 2932 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 05:54:36.409173 kubelet[2932]: I1106 05:54:36.408616 2932 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 6 05:54:36.409475 kubelet[2932]: I1106 05:54:36.409448 2932 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 6 05:54:36.426222 kubelet[2932]: I1106 05:54:36.424652 2932 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 05:54:36.426222 kubelet[2932]: I1106 05:54:36.424727 2932 server.go:1289] "Started kubelet" Nov 6 05:54:36.441348 kubelet[2932]: I1106 05:54:36.441274 2932 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 05:54:36.443997 kubelet[2932]: I1106 05:54:36.443974 2932 server.go:317] "Adding debug handlers to kubelet server" Nov 6 05:54:36.458234 kubelet[2932]: I1106 05:54:36.458179 2932 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 05:54:36.458922 kubelet[2932]: I1106 05:54:36.458852 2932 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 05:54:36.459435 kubelet[2932]: I1106 05:54:36.459410 2932 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 05:54:36.472475 kubelet[2932]: I1106 05:54:36.472448 2932 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 05:54:36.472794 kubelet[2932]: E1106 05:54:36.472764 2932 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-dhf6q.gb1.brightbox.com\" not found" Nov 6 05:54:36.473102 kubelet[2932]: I1106 05:54:36.473081 2932 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 05:54:36.473375 kubelet[2932]: I1106 05:54:36.473295 2932 reconciler.go:26] "Reconciler: start to sync state" Nov 6 05:54:36.474105 kubelet[2932]: I1106 05:54:36.474078 2932 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 05:54:36.479923 kubelet[2932]: E1106 05:54:36.479896 2932 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 05:54:36.484393 kubelet[2932]: I1106 05:54:36.484354 2932 factory.go:223] Registration of the systemd container factory successfully Nov 6 05:54:36.484566 kubelet[2932]: I1106 05:54:36.484525 2932 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 05:54:36.511174 kubelet[2932]: I1106 05:54:36.511062 2932 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Nov 6 05:54:36.515951 kubelet[2932]: I1106 05:54:36.515670 2932 factory.go:223] Registration of the containerd container factory successfully Nov 6 05:54:36.517342 kubelet[2932]: I1106 05:54:36.517317 2932 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 6 05:54:36.517496 kubelet[2932]: I1106 05:54:36.517476 2932 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 6 05:54:36.517614 kubelet[2932]: I1106 05:54:36.517594 2932 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 6 05:54:36.517729 kubelet[2932]: I1106 05:54:36.517710 2932 kubelet.go:2436] "Starting kubelet main sync loop" Nov 6 05:54:36.517947 kubelet[2932]: E1106 05:54:36.517887 2932 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.610823 2932 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.610853 2932 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.610882 2932 state_mem.go:36] "Initialized new in-memory state store" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.611083 2932 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.611103 2932 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.611129 2932 policy_none.go:49] "None policy: Start" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.611163 2932 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.611182 2932 state_mem.go:35] "Initializing new in-memory state store" Nov 6 05:54:36.611712 kubelet[2932]: I1106 05:54:36.611342 2932 state_mem.go:75] "Updated machine memory state" Nov 6 05:54:36.618856 kubelet[2932]: E1106 05:54:36.618177 2932 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 05:54:36.620600 kubelet[2932]: E1106 05:54:36.620117 2932 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 6 05:54:36.620600 kubelet[2932]: I1106 05:54:36.620528 2932 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 05:54:36.620600 kubelet[2932]: I1106 05:54:36.620546 2932 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 05:54:36.621274 kubelet[2932]: I1106 05:54:36.621251 2932 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 05:54:36.627798 kubelet[2932]: E1106 05:54:36.626641 2932 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 6 05:54:36.747248 kubelet[2932]: I1106 05:54:36.747056 2932 kubelet_node_status.go:75] "Attempting to register node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.761630 kubelet[2932]: I1106 05:54:36.761549 2932 kubelet_node_status.go:124] "Node was previously registered" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.761833 kubelet[2932]: I1106 05:54:36.761652 2932 kubelet_node_status.go:78] "Successfully registered node" node="srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.820383 kubelet[2932]: I1106 05:54:36.819987 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.820632 kubelet[2932]: I1106 05:54:36.820578 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.820994 kubelet[2932]: I1106 05:54:36.820955 2932 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.832924 kubelet[2932]: I1106 05:54:36.832512 2932 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 05:54:36.834468 kubelet[2932]: I1106 05:54:36.834405 2932 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 05:54:36.836479 kubelet[2932]: I1106 05:54:36.836038 2932 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 6 05:54:36.836479 kubelet[2932]: E1106 05:54:36.836098 2932 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.875571 kubelet[2932]: I1106 05:54:36.874972 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-ca-certs\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.875571 kubelet[2932]: I1106 05:54:36.875061 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-flexvolume-dir\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.875571 kubelet[2932]: I1106 05:54:36.875096 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-k8s-certs\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.875571 kubelet[2932]: I1106 05:54:36.875124 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/39a70a20eb6d11b11e10727bf302fed4-kubeconfig\") pod \"kube-scheduler-srv-dhf6q.gb1.brightbox.com\" (UID: \"39a70a20eb6d11b11e10727bf302fed4\") " pod="kube-system/kube-scheduler-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.875571 kubelet[2932]: I1106 05:54:36.875175 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/83ec7c6d3556c675106b1ca43339f70d-ca-certs\") pod \"kube-apiserver-srv-dhf6q.gb1.brightbox.com\" (UID: \"83ec7c6d3556c675106b1ca43339f70d\") " pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.876003 kubelet[2932]: I1106 05:54:36.875203 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/83ec7c6d3556c675106b1ca43339f70d-k8s-certs\") pod \"kube-apiserver-srv-dhf6q.gb1.brightbox.com\" (UID: \"83ec7c6d3556c675106b1ca43339f70d\") " pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.876003 kubelet[2932]: I1106 05:54:36.875232 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/83ec7c6d3556c675106b1ca43339f70d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-dhf6q.gb1.brightbox.com\" (UID: \"83ec7c6d3556c675106b1ca43339f70d\") " pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.876003 kubelet[2932]: I1106 05:54:36.875260 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-kubeconfig\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:36.876003 kubelet[2932]: I1106 05:54:36.875289 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abcdc5f7f270bd6b98e47f616d415913-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-dhf6q.gb1.brightbox.com\" (UID: \"abcdc5f7f270bd6b98e47f616d415913\") " pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" Nov 6 05:54:37.419835 kubelet[2932]: I1106 05:54:37.419648 2932 apiserver.go:52] "Watching apiserver" Nov 6 05:54:37.473317 kubelet[2932]: I1106 05:54:37.473223 2932 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 05:54:37.476155 kubelet[2932]: I1106 05:54:37.475277 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-dhf6q.gb1.brightbox.com" podStartSLOduration=1.475211994 podStartE2EDuration="1.475211994s" podCreationTimestamp="2025-11-06 05:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:54:37.474734752 +0000 UTC m=+1.221715585" watchObservedRunningTime="2025-11-06 05:54:37.475211994 +0000 UTC m=+1.222192801" Nov 6 05:54:37.511009 kubelet[2932]: I1106 05:54:37.510677 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-dhf6q.gb1.brightbox.com" podStartSLOduration=4.510653253 podStartE2EDuration="4.510653253s" podCreationTimestamp="2025-11-06 
05:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:54:37.488529569 +0000 UTC m=+1.235510389" watchObservedRunningTime="2025-11-06 05:54:37.510653253 +0000 UTC m=+1.257634072" Nov 6 05:54:37.533448 kubelet[2932]: I1106 05:54:37.533368 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-dhf6q.gb1.brightbox.com" podStartSLOduration=1.5333459729999999 podStartE2EDuration="1.533345973s" podCreationTimestamp="2025-11-06 05:54:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:54:37.511739104 +0000 UTC m=+1.258719936" watchObservedRunningTime="2025-11-06 05:54:37.533345973 +0000 UTC m=+1.280326793" Nov 6 05:54:41.939867 kubelet[2932]: I1106 05:54:41.939790 2932 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 05:54:41.940615 containerd[1638]: time="2025-11-06T05:54:41.940300269Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 05:54:41.942350 kubelet[2932]: I1106 05:54:41.940739 2932 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 05:54:43.010505 systemd[1]: Created slice kubepods-besteffort-pod02bc12ff_c9b4_4218_8419_e58345986c2a.slice - libcontainer container kubepods-besteffort-pod02bc12ff_c9b4_4218_8419_e58345986c2a.slice. Nov 6 05:54:43.014261 kubelet[2932]: I1106 05:54:43.013721 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02bc12ff-c9b4-4218-8419-e58345986c2a-xtables-lock\") pod \"kube-proxy-ht8kf\" (UID: \"02bc12ff-c9b4-4218-8419-e58345986c2a\") " pod="kube-system/kube-proxy-ht8kf" Nov 6 05:54:43.014261 kubelet[2932]: I1106 05:54:43.013770 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02bc12ff-c9b4-4218-8419-e58345986c2a-lib-modules\") pod \"kube-proxy-ht8kf\" (UID: \"02bc12ff-c9b4-4218-8419-e58345986c2a\") " pod="kube-system/kube-proxy-ht8kf" Nov 6 05:54:43.014261 kubelet[2932]: I1106 05:54:43.013811 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02bc12ff-c9b4-4218-8419-e58345986c2a-kube-proxy\") pod \"kube-proxy-ht8kf\" (UID: \"02bc12ff-c9b4-4218-8419-e58345986c2a\") " pod="kube-system/kube-proxy-ht8kf" Nov 6 05:54:43.014261 kubelet[2932]: I1106 05:54:43.013843 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxntd\" (UniqueName: \"kubernetes.io/projected/02bc12ff-c9b4-4218-8419-e58345986c2a-kube-api-access-pxntd\") pod \"kube-proxy-ht8kf\" (UID: \"02bc12ff-c9b4-4218-8419-e58345986c2a\") " pod="kube-system/kube-proxy-ht8kf" Nov 6 05:54:43.162761 systemd[1]: Created slice kubepods-besteffort-pod55aecf68_ce87_4c24_ba92_d71bc97f3867.slice - libcontainer container kubepods-besteffort-pod55aecf68_ce87_4c24_ba92_d71bc97f3867.slice. 
Nov 6 05:54:43.215901 kubelet[2932]: I1106 05:54:43.215833 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/55aecf68-ce87-4c24-ba92-d71bc97f3867-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ld5vv\" (UID: \"55aecf68-ce87-4c24-ba92-d71bc97f3867\") " pod="tigera-operator/tigera-operator-7dcd859c48-ld5vv" Nov 6 05:54:43.216437 kubelet[2932]: I1106 05:54:43.216369 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8x97\" (UniqueName: \"kubernetes.io/projected/55aecf68-ce87-4c24-ba92-d71bc97f3867-kube-api-access-d8x97\") pod \"tigera-operator-7dcd859c48-ld5vv\" (UID: \"55aecf68-ce87-4c24-ba92-d71bc97f3867\") " pod="tigera-operator/tigera-operator-7dcd859c48-ld5vv" Nov 6 05:54:43.322390 containerd[1638]: time="2025-11-06T05:54:43.321994992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht8kf,Uid:02bc12ff-c9b4-4218-8419-e58345986c2a,Namespace:kube-system,Attempt:0,}" Nov 6 05:54:43.361169 containerd[1638]: time="2025-11-06T05:54:43.361016322Z" level=info msg="connecting to shim a23332524ad112ebcde2c271d5115649d6166975f5f20ac1edd3ba0215997e6d" address="unix:///run/containerd/s/88a0ca9857e9067bb39f7063e04cd63577b66baa82172e8468ca4e5ed776fe3c" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:54:43.407392 systemd[1]: Started cri-containerd-a23332524ad112ebcde2c271d5115649d6166975f5f20ac1edd3ba0215997e6d.scope - libcontainer container a23332524ad112ebcde2c271d5115649d6166975f5f20ac1edd3ba0215997e6d. Nov 6 05:54:43.452704 containerd[1638]: time="2025-11-06T05:54:43.452593898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ht8kf,Uid:02bc12ff-c9b4-4218-8419-e58345986c2a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23332524ad112ebcde2c271d5115649d6166975f5f20ac1edd3ba0215997e6d\"" Nov 6 05:54:43.460237 containerd[1638]: time="2025-11-06T05:54:43.460193592Z" level=info msg="CreateContainer within sandbox \"a23332524ad112ebcde2c271d5115649d6166975f5f20ac1edd3ba0215997e6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 05:54:43.470697 containerd[1638]: time="2025-11-06T05:54:43.470496937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ld5vv,Uid:55aecf68-ce87-4c24-ba92-d71bc97f3867,Namespace:tigera-operator,Attempt:0,}" Nov 6 05:54:43.479171 containerd[1638]: time="2025-11-06T05:54:43.478168364Z" level=info msg="Container d618932b2047d80d36823bc1db10cc4e4074ab1922d6d656f66af7f658c001cc: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:54:43.492219 containerd[1638]: time="2025-11-06T05:54:43.492178731Z" level=info msg="CreateContainer within sandbox \"a23332524ad112ebcde2c271d5115649d6166975f5f20ac1edd3ba0215997e6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d618932b2047d80d36823bc1db10cc4e4074ab1922d6d656f66af7f658c001cc\"" Nov 6 05:54:43.493817 containerd[1638]: time="2025-11-06T05:54:43.493751457Z" level=info msg="StartContainer for \"d618932b2047d80d36823bc1db10cc4e4074ab1922d6d656f66af7f658c001cc\"" Nov 6 05:54:43.497640 containerd[1638]: time="2025-11-06T05:54:43.497596614Z" level=info msg="connecting to shim d618932b2047d80d36823bc1db10cc4e4074ab1922d6d656f66af7f658c001cc" address="unix:///run/containerd/s/88a0ca9857e9067bb39f7063e04cd63577b66baa82172e8468ca4e5ed776fe3c" protocol=ttrpc version=3 Nov 6 05:54:43.512496 containerd[1638]: 
time="2025-11-06T05:54:43.512423451Z" level=info msg="connecting to shim e03fc78d504f293de47e717ccfdec8a770e406d8469e65258c951c4e433b5599" address="unix:///run/containerd/s/5e71040f76d98586041981bb62cf6a34ef6b074e7665dea7a4d6d2589e0792d4" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:54:43.539386 systemd[1]: Started cri-containerd-d618932b2047d80d36823bc1db10cc4e4074ab1922d6d656f66af7f658c001cc.scope - libcontainer container d618932b2047d80d36823bc1db10cc4e4074ab1922d6d656f66af7f658c001cc. Nov 6 05:54:43.571385 systemd[1]: Started cri-containerd-e03fc78d504f293de47e717ccfdec8a770e406d8469e65258c951c4e433b5599.scope - libcontainer container e03fc78d504f293de47e717ccfdec8a770e406d8469e65258c951c4e433b5599. Nov 6 05:54:43.650257 containerd[1638]: time="2025-11-06T05:54:43.650095912Z" level=info msg="StartContainer for \"d618932b2047d80d36823bc1db10cc4e4074ab1922d6d656f66af7f658c001cc\" returns successfully" Nov 6 05:54:43.677969 containerd[1638]: time="2025-11-06T05:54:43.677905407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ld5vv,Uid:55aecf68-ce87-4c24-ba92-d71bc97f3867,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e03fc78d504f293de47e717ccfdec8a770e406d8469e65258c951c4e433b5599\"" Nov 6 05:54:43.683426 containerd[1638]: time="2025-11-06T05:54:43.683359677Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 6 05:54:44.148897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount470388016.mount: Deactivated successfully. Nov 6 05:54:44.606159 kubelet[2932]: I1106 05:54:44.605946 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ht8kf" podStartSLOduration=2.604939989 podStartE2EDuration="2.604939989s" podCreationTimestamp="2025-11-06 05:54:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:54:44.604243487 +0000 UTC m=+8.351224321" watchObservedRunningTime="2025-11-06 05:54:44.604939989 +0000 UTC m=+8.351920808" Nov 6 05:54:45.238439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8762688.mount: Deactivated successfully. 
Nov 6 05:54:47.112863 containerd[1638]: time="2025-11-06T05:54:47.112769429Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:47.114720 containerd[1638]: time="2025-11-06T05:54:47.114653879Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 6 05:54:47.115428 containerd[1638]: time="2025-11-06T05:54:47.115382411Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:47.118739 containerd[1638]: time="2025-11-06T05:54:47.118654196Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:54:47.120829 containerd[1638]: time="2025-11-06T05:54:47.120767223Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.437346432s" Nov 6 05:54:47.120829 containerd[1638]: time="2025-11-06T05:54:47.120817917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 6 05:54:47.127894 containerd[1638]: time="2025-11-06T05:54:47.127460930Z" level=info msg="CreateContainer within sandbox \"e03fc78d504f293de47e717ccfdec8a770e406d8469e65258c951c4e433b5599\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 6 05:54:47.141028 containerd[1638]: time="2025-11-06T05:54:47.140968895Z" level=info msg="Container 89da50e880c15c7ab19cb3b31657a4412bd0b782d5be676d610acf26d6df6296: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:54:47.150890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2932364340.mount: Deactivated successfully. Nov 6 05:54:47.201130 containerd[1638]: time="2025-11-06T05:54:47.201019298Z" level=info msg="CreateContainer within sandbox \"e03fc78d504f293de47e717ccfdec8a770e406d8469e65258c951c4e433b5599\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"89da50e880c15c7ab19cb3b31657a4412bd0b782d5be676d610acf26d6df6296\"" Nov 6 05:54:47.202468 containerd[1638]: time="2025-11-06T05:54:47.202424339Z" level=info msg="StartContainer for \"89da50e880c15c7ab19cb3b31657a4412bd0b782d5be676d610acf26d6df6296\"" Nov 6 05:54:47.204010 containerd[1638]: time="2025-11-06T05:54:47.203968444Z" level=info msg="connecting to shim 89da50e880c15c7ab19cb3b31657a4412bd0b782d5be676d610acf26d6df6296" address="unix:///run/containerd/s/5e71040f76d98586041981bb62cf6a34ef6b074e7665dea7a4d6d2589e0792d4" protocol=ttrpc version=3 Nov 6 05:54:47.233588 systemd[1]: Started cri-containerd-89da50e880c15c7ab19cb3b31657a4412bd0b782d5be676d610acf26d6df6296.scope - libcontainer container 89da50e880c15c7ab19cb3b31657a4412bd0b782d5be676d610acf26d6df6296. 
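The pull recorded above (quay.io/tigera/operator:v1.38.7, about 3.4 s) happens in containerd's k8s.io namespace. Roughly the same operation through the containerd Go client, as a sketch; the socket path is the conventional default and is not taken from this log:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed default socket
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace, matching namespace=k8s.io in the log.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), img.Target().Digest)
}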
Nov 6 05:54:47.284677 containerd[1638]: time="2025-11-06T05:54:47.284612980Z" level=info msg="StartContainer for \"89da50e880c15c7ab19cb3b31657a4412bd0b782d5be676d610acf26d6df6296\" returns successfully" Nov 6 05:54:48.749113 kubelet[2932]: I1106 05:54:48.748942 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ld5vv" podStartSLOduration=2.3087066 podStartE2EDuration="5.748866054s" podCreationTimestamp="2025-11-06 05:54:43 +0000 UTC" firstStartedPulling="2025-11-06 05:54:43.682126801 +0000 UTC m=+7.429107613" lastFinishedPulling="2025-11-06 05:54:47.122286262 +0000 UTC m=+10.869267067" observedRunningTime="2025-11-06 05:54:47.619661532 +0000 UTC m=+11.366642358" watchObservedRunningTime="2025-11-06 05:54:48.748866054 +0000 UTC m=+12.495846874" Nov 6 05:54:54.911859 sudo[1936]: pam_unix(sudo:session): session closed for user root Nov 6 05:54:55.061553 sshd[1929]: Connection closed by 139.178.68.195 port 34500 Nov 6 05:54:55.064519 sshd-session[1917]: pam_unix(sshd:session): session closed for user core Nov 6 05:54:55.071997 systemd[1]: sshd@8-10.230.27.98:22-139.178.68.195:34500.service: Deactivated successfully. Nov 6 05:54:55.079947 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 05:54:55.080630 systemd[1]: session-11.scope: Consumed 7.910s CPU time, 153.9M memory peak. Nov 6 05:54:55.086241 systemd-logind[1611]: Session 11 logged out. Waiting for processes to exit. Nov 6 05:54:55.091900 systemd-logind[1611]: Removed session 11. Nov 6 05:55:01.836701 systemd[1]: Created slice kubepods-besteffort-podb80d90bb_0260_4af6_8eee_215894a61311.slice - libcontainer container kubepods-besteffort-podb80d90bb_0260_4af6_8eee_215894a61311.slice. Nov 6 05:55:01.851669 kubelet[2932]: I1106 05:55:01.850723 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b80d90bb-0260-4af6-8eee-215894a61311-tigera-ca-bundle\") pod \"calico-typha-5cbd56cb6c-kzpsr\" (UID: \"b80d90bb-0260-4af6-8eee-215894a61311\") " pod="calico-system/calico-typha-5cbd56cb6c-kzpsr" Nov 6 05:55:01.852379 kubelet[2932]: I1106 05:55:01.851690 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k46r6\" (UniqueName: \"kubernetes.io/projected/b80d90bb-0260-4af6-8eee-215894a61311-kube-api-access-k46r6\") pod \"calico-typha-5cbd56cb6c-kzpsr\" (UID: \"b80d90bb-0260-4af6-8eee-215894a61311\") " pod="calico-system/calico-typha-5cbd56cb6c-kzpsr" Nov 6 05:55:01.852379 kubelet[2932]: I1106 05:55:01.851766 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b80d90bb-0260-4af6-8eee-215894a61311-typha-certs\") pod \"calico-typha-5cbd56cb6c-kzpsr\" (UID: \"b80d90bb-0260-4af6-8eee-215894a61311\") " pod="calico-system/calico-typha-5cbd56cb6c-kzpsr" Nov 6 05:55:02.022632 systemd[1]: Created slice kubepods-besteffort-podbd3bc966_5f69_4cad_9beb_df61e0e89e6e.slice - libcontainer container kubepods-besteffort-podbd3bc966_5f69_4cad_9beb_df61e0e89e6e.slice. 
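The startup-latency figures reported by pod_startup_latency_tracker fit together: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. A quick check of the tigera-operator-7dcd859c48-ld5vv numbers copied from the entry above:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Timestamps copied from the tigera-operator pod_startup_latency_tracker entry.
	created := parse("2025-11-06 05:54:43 +0000 UTC")
	firstPull := parse("2025-11-06 05:54:43.682126801 +0000 UTC")
	lastPull := parse("2025-11-06 05:54:47.122286262 +0000 UTC")
	running := parse("2025-11-06 05:54:48.748866054 +0000 UTC")

	fmt.Println("E2E:", running.Sub(created))                         // 5.748866054s
	fmt.Println("SLO:", running.Sub(created)-lastPull.Sub(firstPull)) // ~2.3087066s, pull time excluded
}

The control-plane pods earlier in the log show identical E2E and SLO durations because their images were never pulled (both pull timestamps are the zero value 0001-01-01).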
Nov 6 05:55:02.054065 kubelet[2932]: I1106 05:55:02.053270 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-var-lib-calico\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.054065 kubelet[2932]: I1106 05:55:02.053338 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-cni-net-dir\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.054065 kubelet[2932]: I1106 05:55:02.053370 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-tigera-ca-bundle\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.054065 kubelet[2932]: I1106 05:55:02.053409 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-cni-log-dir\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.054065 kubelet[2932]: I1106 05:55:02.053457 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-policysync\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.054561 kubelet[2932]: I1106 05:55:02.053499 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-var-run-calico\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.054561 kubelet[2932]: I1106 05:55:02.053541 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-flexvol-driver-host\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.054561 kubelet[2932]: I1106 05:55:02.053578 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-lib-modules\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.056028 kubelet[2932]: I1106 05:55:02.055100 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-node-certs\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.056028 kubelet[2932]: I1106 05:55:02.055188 2932 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-xtables-lock\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.056028 kubelet[2932]: I1106 05:55:02.055246 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-cni-bin-dir\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.056028 kubelet[2932]: I1106 05:55:02.055275 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzcf4\" (UniqueName: \"kubernetes.io/projected/bd3bc966-5f69-4cad-9beb-df61e0e89e6e-kube-api-access-bzcf4\") pod \"calico-node-nbrr9\" (UID: \"bd3bc966-5f69-4cad-9beb-df61e0e89e6e\") " pod="calico-system/calico-node-nbrr9" Nov 6 05:55:02.102158 kubelet[2932]: E1106 05:55:02.101520 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:02.145870 containerd[1638]: time="2025-11-06T05:55:02.145710152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cbd56cb6c-kzpsr,Uid:b80d90bb-0260-4af6-8eee-215894a61311,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:02.157257 kubelet[2932]: I1106 05:55:02.156484 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b7c3e80-078e-47aa-8574-17ccfe24f839-varrun\") pod \"csi-node-driver-298gz\" (UID: \"9b7c3e80-078e-47aa-8574-17ccfe24f839\") " pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:02.157257 kubelet[2932]: I1106 05:55:02.156611 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b7c3e80-078e-47aa-8574-17ccfe24f839-socket-dir\") pod \"csi-node-driver-298gz\" (UID: \"9b7c3e80-078e-47aa-8574-17ccfe24f839\") " pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:02.157257 kubelet[2932]: I1106 05:55:02.156754 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flp8s\" (UniqueName: \"kubernetes.io/projected/9b7c3e80-078e-47aa-8574-17ccfe24f839-kube-api-access-flp8s\") pod \"csi-node-driver-298gz\" (UID: \"9b7c3e80-078e-47aa-8574-17ccfe24f839\") " pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:02.158364 kubelet[2932]: I1106 05:55:02.157924 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b7c3e80-078e-47aa-8574-17ccfe24f839-registration-dir\") pod \"csi-node-driver-298gz\" (UID: \"9b7c3e80-078e-47aa-8574-17ccfe24f839\") " pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:02.158364 kubelet[2932]: I1106 05:55:02.158064 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b7c3e80-078e-47aa-8574-17ccfe24f839-kubelet-dir\") pod 
\"csi-node-driver-298gz\" (UID: \"9b7c3e80-078e-47aa-8574-17ccfe24f839\") " pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:02.164498 kubelet[2932]: E1106 05:55:02.164270 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.164875 kubelet[2932]: W1106 05:55:02.164750 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.165216 kubelet[2932]: E1106 05:55:02.165080 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.166818 kubelet[2932]: E1106 05:55:02.166486 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.166818 kubelet[2932]: W1106 05:55:02.166613 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.166818 kubelet[2932]: E1106 05:55:02.166637 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.170161 kubelet[2932]: E1106 05:55:02.169897 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.170161 kubelet[2932]: W1106 05:55:02.169920 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.170161 kubelet[2932]: E1106 05:55:02.169938 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.171414 kubelet[2932]: E1106 05:55:02.171250 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.171414 kubelet[2932]: W1106 05:55:02.171271 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.171414 kubelet[2932]: E1106 05:55:02.171288 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.186092 kubelet[2932]: E1106 05:55:02.185966 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.186092 kubelet[2932]: W1106 05:55:02.186012 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.186092 kubelet[2932]: E1106 05:55:02.186042 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:02.220675 kubelet[2932]: E1106 05:55:02.220622 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.221334 kubelet[2932]: W1106 05:55:02.221304 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.221576 kubelet[2932]: E1106 05:55:02.221550 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.227696 containerd[1638]: time="2025-11-06T05:55:02.227347129Z" level=info msg="connecting to shim 4ca45080fa970c1610f92216609895623130b5fca318df7c9107e768169fcfe4" address="unix:///run/containerd/s/3b48eef67fed41a44fd06301c6387c349e2849c5367a4ccf9315bd3f947e114a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:02.260875 kubelet[2932]: E1106 05:55:02.260821 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.261238 kubelet[2932]: W1106 05:55:02.261165 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.261971 kubelet[2932]: E1106 05:55:02.261376 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.262529 kubelet[2932]: E1106 05:55:02.262504 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.263035 kubelet[2932]: W1106 05:55:02.262892 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.263035 kubelet[2932]: E1106 05:55:02.262920 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.264344 kubelet[2932]: E1106 05:55:02.264221 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.264811 kubelet[2932]: W1106 05:55:02.264572 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.264811 kubelet[2932]: E1106 05:55:02.264599 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:02.266010 kubelet[2932]: E1106 05:55:02.265961 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.266010 kubelet[2932]: W1106 05:55:02.265981 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.267225 kubelet[2932]: E1106 05:55:02.266164 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.267726 kubelet[2932]: E1106 05:55:02.267688 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.267971 kubelet[2932]: W1106 05:55:02.267840 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.267971 kubelet[2932]: E1106 05:55:02.267880 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.269159 kubelet[2932]: E1106 05:55:02.268798 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.269159 kubelet[2932]: W1106 05:55:02.269091 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.269159 kubelet[2932]: E1106 05:55:02.269112 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.270322 kubelet[2932]: E1106 05:55:02.270264 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.270963 kubelet[2932]: W1106 05:55:02.270285 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.270963 kubelet[2932]: E1106 05:55:02.270554 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.272152 kubelet[2932]: E1106 05:55:02.272096 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.272537 kubelet[2932]: W1106 05:55:02.272265 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.272537 kubelet[2932]: E1106 05:55:02.272292 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:02.273635 kubelet[2932]: E1106 05:55:02.273393 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.273635 kubelet[2932]: W1106 05:55:02.273413 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.273635 kubelet[2932]: E1106 05:55:02.273432 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.275310 kubelet[2932]: E1106 05:55:02.275288 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.275576 kubelet[2932]: W1106 05:55:02.275405 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.275576 kubelet[2932]: E1106 05:55:02.275432 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.276205 kubelet[2932]: E1106 05:55:02.276016 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.276205 kubelet[2932]: W1106 05:55:02.276093 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.276205 kubelet[2932]: E1106 05:55:02.276114 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.277695 kubelet[2932]: E1106 05:55:02.277243 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.277900 kubelet[2932]: W1106 05:55:02.277264 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.277900 kubelet[2932]: E1106 05:55:02.277823 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.279268 kubelet[2932]: E1106 05:55:02.279246 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.279501 kubelet[2932]: W1106 05:55:02.279386 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.279501 kubelet[2932]: E1106 05:55:02.279417 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:02.280582 kubelet[2932]: E1106 05:55:02.280503 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.280582 kubelet[2932]: W1106 05:55:02.280523 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.280582 kubelet[2932]: E1106 05:55:02.280551 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.282490 kubelet[2932]: E1106 05:55:02.282468 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.282749 kubelet[2932]: W1106 05:55:02.282613 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.282749 kubelet[2932]: E1106 05:55:02.282642 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.283612 kubelet[2932]: E1106 05:55:02.283249 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.283612 kubelet[2932]: W1106 05:55:02.283572 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.284283 kubelet[2932]: E1106 05:55:02.283593 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.285267 kubelet[2932]: E1106 05:55:02.284697 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.285620 kubelet[2932]: W1106 05:55:02.285560 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.285620 kubelet[2932]: E1106 05:55:02.285587 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.286973 kubelet[2932]: E1106 05:55:02.286941 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.287269 kubelet[2932]: W1106 05:55:02.287078 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.287269 kubelet[2932]: E1106 05:55:02.287105 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:02.288315 kubelet[2932]: E1106 05:55:02.288252 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.288315 kubelet[2932]: W1106 05:55:02.288272 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.288315 kubelet[2932]: E1106 05:55:02.288288 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.289470 kubelet[2932]: E1106 05:55:02.289424 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.289470 kubelet[2932]: W1106 05:55:02.289443 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.289708 kubelet[2932]: E1106 05:55:02.289598 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.290323 kubelet[2932]: E1106 05:55:02.290277 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.290323 kubelet[2932]: W1106 05:55:02.290297 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.290649 kubelet[2932]: E1106 05:55:02.290512 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.291222 kubelet[2932]: E1106 05:55:02.291017 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.291222 kubelet[2932]: W1106 05:55:02.291036 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.291222 kubelet[2932]: E1106 05:55:02.291065 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.292899 kubelet[2932]: E1106 05:55:02.292525 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.292899 kubelet[2932]: W1106 05:55:02.292543 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.292899 kubelet[2932]: E1106 05:55:02.292559 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:02.293918 kubelet[2932]: E1106 05:55:02.293856 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.293918 kubelet[2932]: W1106 05:55:02.293876 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.293918 kubelet[2932]: E1106 05:55:02.293895 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.294626 kubelet[2932]: E1106 05:55:02.294557 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.294626 kubelet[2932]: W1106 05:55:02.294575 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.294626 kubelet[2932]: E1106 05:55:02.294592 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.302571 systemd[1]: Started cri-containerd-4ca45080fa970c1610f92216609895623130b5fca318df7c9107e768169fcfe4.scope - libcontainer container 4ca45080fa970c1610f92216609895623130b5fca318df7c9107e768169fcfe4. Nov 6 05:55:02.321221 kubelet[2932]: E1106 05:55:02.319596 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:02.321221 kubelet[2932]: W1106 05:55:02.319635 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:02.321221 kubelet[2932]: E1106 05:55:02.319663 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:02.331842 containerd[1638]: time="2025-11-06T05:55:02.331621805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nbrr9,Uid:bd3bc966-5f69-4cad-9beb-df61e0e89e6e,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:02.374972 containerd[1638]: time="2025-11-06T05:55:02.373910707Z" level=info msg="connecting to shim 98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a" address="unix:///run/containerd/s/87e0ae288f371664ea5e37740f6f5c4f43757f2ef89565f7c55e54487c71a2ef" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:02.431414 systemd[1]: Started cri-containerd-98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a.scope - libcontainer container 98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a. 
Nov 6 05:55:02.602667 containerd[1638]: time="2025-11-06T05:55:02.602231378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cbd56cb6c-kzpsr,Uid:b80d90bb-0260-4af6-8eee-215894a61311,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ca45080fa970c1610f92216609895623130b5fca318df7c9107e768169fcfe4\"" Nov 6 05:55:02.606844 containerd[1638]: time="2025-11-06T05:55:02.606582603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nbrr9,Uid:bd3bc966-5f69-4cad-9beb-df61e0e89e6e,Namespace:calico-system,Attempt:0,} returns sandbox id \"98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a\"" Nov 6 05:55:02.607825 containerd[1638]: time="2025-11-06T05:55:02.607553441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 6 05:55:03.518401 kubelet[2932]: E1106 05:55:03.518291 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:04.289369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227805462.mount: Deactivated successfully. Nov 6 05:55:05.518723 kubelet[2932]: E1106 05:55:05.518650 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:06.703858 containerd[1638]: time="2025-11-06T05:55:06.703767946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:06.706402 containerd[1638]: time="2025-11-06T05:55:06.706247138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 6 05:55:06.708297 containerd[1638]: time="2025-11-06T05:55:06.708249467Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:06.715543 containerd[1638]: time="2025-11-06T05:55:06.715470186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:06.717070 containerd[1638]: time="2025-11-06T05:55:06.716364446Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.108517141s" Nov 6 05:55:06.717070 containerd[1638]: time="2025-11-06T05:55:06.716423278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 6 05:55:06.718911 containerd[1638]: time="2025-11-06T05:55:06.718880188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 6 05:55:06.752449 containerd[1638]: 
time="2025-11-06T05:55:06.752375904Z" level=info msg="CreateContainer within sandbox \"4ca45080fa970c1610f92216609895623130b5fca318df7c9107e768169fcfe4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 6 05:55:06.765502 containerd[1638]: time="2025-11-06T05:55:06.765448960Z" level=info msg="Container 8fb9653bb44f7cf9f119962042363d472ed24d7f3cfcf3ed201db9edce103772: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:55:06.780822 containerd[1638]: time="2025-11-06T05:55:06.780729784Z" level=info msg="CreateContainer within sandbox \"4ca45080fa970c1610f92216609895623130b5fca318df7c9107e768169fcfe4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8fb9653bb44f7cf9f119962042363d472ed24d7f3cfcf3ed201db9edce103772\"" Nov 6 05:55:06.783201 containerd[1638]: time="2025-11-06T05:55:06.781451954Z" level=info msg="StartContainer for \"8fb9653bb44f7cf9f119962042363d472ed24d7f3cfcf3ed201db9edce103772\"" Nov 6 05:55:06.784327 containerd[1638]: time="2025-11-06T05:55:06.784292518Z" level=info msg="connecting to shim 8fb9653bb44f7cf9f119962042363d472ed24d7f3cfcf3ed201db9edce103772" address="unix:///run/containerd/s/3b48eef67fed41a44fd06301c6387c349e2849c5367a4ccf9315bd3f947e114a" protocol=ttrpc version=3 Nov 6 05:55:06.860691 systemd[1]: Started cri-containerd-8fb9653bb44f7cf9f119962042363d472ed24d7f3cfcf3ed201db9edce103772.scope - libcontainer container 8fb9653bb44f7cf9f119962042363d472ed24d7f3cfcf3ed201db9edce103772. Nov 6 05:55:06.949545 containerd[1638]: time="2025-11-06T05:55:06.949482338Z" level=info msg="StartContainer for \"8fb9653bb44f7cf9f119962042363d472ed24d7f3cfcf3ed201db9edce103772\" returns successfully" Nov 6 05:55:07.518879 kubelet[2932]: E1106 05:55:07.518797 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:07.743432 kubelet[2932]: I1106 05:55:07.742282 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cbd56cb6c-kzpsr" podStartSLOduration=2.630386109 podStartE2EDuration="6.742261666s" podCreationTimestamp="2025-11-06 05:55:01 +0000 UTC" firstStartedPulling="2025-11-06 05:55:02.606222137 +0000 UTC m=+26.353202953" lastFinishedPulling="2025-11-06 05:55:06.718097698 +0000 UTC m=+30.465078510" observedRunningTime="2025-11-06 05:55:07.741942632 +0000 UTC m=+31.488923457" watchObservedRunningTime="2025-11-06 05:55:07.742261666 +0000 UTC m=+31.489242486" Nov 6 05:55:07.791555 kubelet[2932]: E1106 05:55:07.791351 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.792121 kubelet[2932]: W1106 05:55:07.791774 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.792121 kubelet[2932]: E1106 05:55:07.791824 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:07.792548 kubelet[2932]: E1106 05:55:07.792378 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.792548 kubelet[2932]: W1106 05:55:07.792398 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.792548 kubelet[2932]: E1106 05:55:07.792426 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.793235 kubelet[2932]: E1106 05:55:07.792838 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.793235 kubelet[2932]: W1106 05:55:07.792853 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.793235 kubelet[2932]: E1106 05:55:07.792868 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.793712 kubelet[2932]: E1106 05:55:07.793538 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.793712 kubelet[2932]: W1106 05:55:07.793557 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.793712 kubelet[2932]: E1106 05:55:07.793573 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.794153 kubelet[2932]: E1106 05:55:07.794006 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.794153 kubelet[2932]: W1106 05:55:07.794026 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.794153 kubelet[2932]: E1106 05:55:07.794045 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.794552 kubelet[2932]: E1106 05:55:07.794531 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.794815 kubelet[2932]: W1106 05:55:07.794654 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.794815 kubelet[2932]: E1106 05:55:07.794681 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:07.795034 kubelet[2932]: E1106 05:55:07.795014 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.795168 kubelet[2932]: W1106 05:55:07.795124 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.795404 kubelet[2932]: E1106 05:55:07.795258 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.795586 kubelet[2932]: E1106 05:55:07.795567 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.795699 kubelet[2932]: W1106 05:55:07.795678 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.795950 kubelet[2932]: E1106 05:55:07.795785 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.796107 kubelet[2932]: E1106 05:55:07.796087 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.796256 kubelet[2932]: W1106 05:55:07.796236 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.796366 kubelet[2932]: E1106 05:55:07.796346 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.797096 kubelet[2932]: E1106 05:55:07.797076 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.797096 kubelet[2932]: W1106 05:55:07.797156 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.797096 kubelet[2932]: E1106 05:55:07.797178 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.797934 kubelet[2932]: E1106 05:55:07.797914 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.798225 kubelet[2932]: W1106 05:55:07.798042 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.798225 kubelet[2932]: E1106 05:55:07.798067 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:07.798949 kubelet[2932]: E1106 05:55:07.798899 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.799278 kubelet[2932]: W1106 05:55:07.798922 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.799278 kubelet[2932]: E1106 05:55:07.799059 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.799662 kubelet[2932]: E1106 05:55:07.799605 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.799662 kubelet[2932]: W1106 05:55:07.799624 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.800226 kubelet[2932]: E1106 05:55:07.799865 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.800670 kubelet[2932]: E1106 05:55:07.800599 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.800670 kubelet[2932]: W1106 05:55:07.800619 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.800670 kubelet[2932]: E1106 05:55:07.800634 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.801572 kubelet[2932]: E1106 05:55:07.801456 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.801572 kubelet[2932]: W1106 05:55:07.801493 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.801572 kubelet[2932]: E1106 05:55:07.801516 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.828707 kubelet[2932]: E1106 05:55:07.828656 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.828707 kubelet[2932]: W1106 05:55:07.828686 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.828707 kubelet[2932]: E1106 05:55:07.828710 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:07.829018 kubelet[2932]: E1106 05:55:07.828984 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.829018 kubelet[2932]: W1106 05:55:07.829005 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.829125 kubelet[2932]: E1106 05:55:07.829021 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.829381 kubelet[2932]: E1106 05:55:07.829356 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.829381 kubelet[2932]: W1106 05:55:07.829376 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.829519 kubelet[2932]: E1106 05:55:07.829391 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.829741 kubelet[2932]: E1106 05:55:07.829718 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.829741 kubelet[2932]: W1106 05:55:07.829738 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.829862 kubelet[2932]: E1106 05:55:07.829754 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.830076 kubelet[2932]: E1106 05:55:07.830055 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.830076 kubelet[2932]: W1106 05:55:07.830074 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.830198 kubelet[2932]: E1106 05:55:07.830090 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.830376 kubelet[2932]: E1106 05:55:07.830354 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.830376 kubelet[2932]: W1106 05:55:07.830373 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.830526 kubelet[2932]: E1106 05:55:07.830388 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:07.830709 kubelet[2932]: E1106 05:55:07.830677 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.830709 kubelet[2932]: W1106 05:55:07.830697 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.830813 kubelet[2932]: E1106 05:55:07.830712 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.831041 kubelet[2932]: E1106 05:55:07.831018 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.831041 kubelet[2932]: W1106 05:55:07.831037 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.831238 kubelet[2932]: E1106 05:55:07.831073 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.831460 kubelet[2932]: E1106 05:55:07.831436 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.831460 kubelet[2932]: W1106 05:55:07.831457 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.831576 kubelet[2932]: E1106 05:55:07.831472 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.832059 kubelet[2932]: E1106 05:55:07.832023 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.832059 kubelet[2932]: W1106 05:55:07.832049 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.832241 kubelet[2932]: E1106 05:55:07.832065 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.832381 kubelet[2932]: E1106 05:55:07.832359 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.832381 kubelet[2932]: W1106 05:55:07.832379 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.832501 kubelet[2932]: E1106 05:55:07.832395 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:07.832672 kubelet[2932]: E1106 05:55:07.832650 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.832672 kubelet[2932]: W1106 05:55:07.832669 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.832798 kubelet[2932]: E1106 05:55:07.832685 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.833521 kubelet[2932]: E1106 05:55:07.833497 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.833521 kubelet[2932]: W1106 05:55:07.833518 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.833649 kubelet[2932]: E1106 05:55:07.833536 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.833797 kubelet[2932]: E1106 05:55:07.833777 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.833797 kubelet[2932]: W1106 05:55:07.833796 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.833943 kubelet[2932]: E1106 05:55:07.833811 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.834178 kubelet[2932]: E1106 05:55:07.834092 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.834178 kubelet[2932]: W1106 05:55:07.834111 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.834293 kubelet[2932]: E1106 05:55:07.834208 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.834545 kubelet[2932]: E1106 05:55:07.834524 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.834545 kubelet[2932]: W1106 05:55:07.834543 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.834656 kubelet[2932]: E1106 05:55:07.834558 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 6 05:55:07.834812 kubelet[2932]: E1106 05:55:07.834793 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.834812 kubelet[2932]: W1106 05:55:07.834812 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.834903 kubelet[2932]: E1106 05:55:07.834829 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:07.835276 kubelet[2932]: E1106 05:55:07.835255 2932 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 6 05:55:07.835276 kubelet[2932]: W1106 05:55:07.835274 2932 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 6 05:55:07.835448 kubelet[2932]: E1106 05:55:07.835289 2932 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 6 05:55:08.322167 containerd[1638]: time="2025-11-06T05:55:08.321007267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:08.322982 containerd[1638]: time="2025-11-06T05:55:08.322945934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:08.323849 containerd[1638]: time="2025-11-06T05:55:08.323813201Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:08.326284 containerd[1638]: time="2025-11-06T05:55:08.326251630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:08.327502 containerd[1638]: time="2025-11-06T05:55:08.327397124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.608307649s" Nov 6 05:55:08.327502 containerd[1638]: time="2025-11-06T05:55:08.327456031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 6 05:55:08.334204 containerd[1638]: time="2025-11-06T05:55:08.334164514Z" level=info msg="CreateContainer within sandbox \"98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 6 05:55:08.377451 containerd[1638]: time="2025-11-06T05:55:08.377377587Z" level=info msg="Container 10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713: CDI devices from 
CRI Config.CDIDevices: []" Nov 6 05:55:08.388261 containerd[1638]: time="2025-11-06T05:55:08.388206603Z" level=info msg="CreateContainer within sandbox \"98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713\"" Nov 6 05:55:08.390170 containerd[1638]: time="2025-11-06T05:55:08.389552985Z" level=info msg="StartContainer for \"10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713\"" Nov 6 05:55:08.392751 containerd[1638]: time="2025-11-06T05:55:08.392707506Z" level=info msg="connecting to shim 10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713" address="unix:///run/containerd/s/87e0ae288f371664ea5e37740f6f5c4f43757f2ef89565f7c55e54487c71a2ef" protocol=ttrpc version=3 Nov 6 05:55:08.429377 systemd[1]: Started cri-containerd-10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713.scope - libcontainer container 10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713. Nov 6 05:55:08.544907 containerd[1638]: time="2025-11-06T05:55:08.544822815Z" level=info msg="StartContainer for \"10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713\" returns successfully" Nov 6 05:55:08.568479 systemd[1]: cri-containerd-10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713.scope: Deactivated successfully. Nov 6 05:55:08.603014 containerd[1638]: time="2025-11-06T05:55:08.602932738Z" level=info msg="received exit event container_id:\"10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713\" id:\"10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713\" pid:3580 exited_at:{seconds:1762408508 nanos:574986533}" Nov 6 05:55:08.639754 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ffc019f4eeefa1a9ab2739c531468c58813f013f2c7724ff2b2b5aed0b9713-rootfs.mount: Deactivated successfully. 
Nov 6 05:55:08.738815 kubelet[2932]: I1106 05:55:08.738656 2932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 05:55:09.519347 kubelet[2932]: E1106 05:55:09.518895 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:09.741202 containerd[1638]: time="2025-11-06T05:55:09.740327063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 6 05:55:11.519160 kubelet[2932]: E1106 05:55:11.519073 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:13.520271 kubelet[2932]: E1106 05:55:13.518817 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:15.519266 kubelet[2932]: E1106 05:55:15.519171 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:15.841234 containerd[1638]: time="2025-11-06T05:55:15.840980728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:15.843184 containerd[1638]: time="2025-11-06T05:55:15.842952380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 6 05:55:15.845163 containerd[1638]: time="2025-11-06T05:55:15.845096238Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:15.847837 containerd[1638]: time="2025-11-06T05:55:15.847773356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:15.849520 containerd[1638]: time="2025-11-06T05:55:15.849198539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.107908954s" Nov 6 05:55:15.849520 containerd[1638]: time="2025-11-06T05:55:15.849243979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 6 05:55:15.856181 containerd[1638]: time="2025-11-06T05:55:15.855955924Z" level=info msg="CreateContainer within sandbox 
\"98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 6 05:55:15.870757 containerd[1638]: time="2025-11-06T05:55:15.870395848Z" level=info msg="Container 51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:55:15.881862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2973475536.mount: Deactivated successfully. Nov 6 05:55:15.896337 containerd[1638]: time="2025-11-06T05:55:15.896030080Z" level=info msg="CreateContainer within sandbox \"98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648\"" Nov 6 05:55:15.899925 containerd[1638]: time="2025-11-06T05:55:15.897588685Z" level=info msg="StartContainer for \"51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648\"" Nov 6 05:55:15.901290 containerd[1638]: time="2025-11-06T05:55:15.901237491Z" level=info msg="connecting to shim 51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648" address="unix:///run/containerd/s/87e0ae288f371664ea5e37740f6f5c4f43757f2ef89565f7c55e54487c71a2ef" protocol=ttrpc version=3 Nov 6 05:55:15.949501 systemd[1]: Started cri-containerd-51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648.scope - libcontainer container 51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648. Nov 6 05:55:16.033013 containerd[1638]: time="2025-11-06T05:55:16.032949241Z" level=info msg="StartContainer for \"51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648\" returns successfully" Nov 6 05:55:17.282627 systemd[1]: cri-containerd-51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648.scope: Deactivated successfully. Nov 6 05:55:17.285298 systemd[1]: cri-containerd-51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648.scope: Consumed 798ms CPU time, 162.3M memory peak, 9.5M read from disk, 171.3M written to disk. Nov 6 05:55:17.290941 containerd[1638]: time="2025-11-06T05:55:17.290472480Z" level=info msg="received exit event container_id:\"51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648\" id:\"51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648\" pid:3643 exited_at:{seconds:1762408517 nanos:289923358}" Nov 6 05:55:17.323731 kubelet[2932]: I1106 05:55:17.323616 2932 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 05:55:17.340831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51ddba168f8d74ed48feae1addfbc222e9c5add709a9efeb3150b95b8ec6f648-rootfs.mount: Deactivated successfully. Nov 6 05:55:17.523860 systemd[1]: Created slice kubepods-burstable-pod395b50d4_a01a_4208_bbc6_1cef03cb976b.slice - libcontainer container kubepods-burstable-pod395b50d4_a01a_4208_bbc6_1cef03cb976b.slice. 
Nov 6 05:55:17.529452 kubelet[2932]: I1106 05:55:17.528475 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9hmw\" (UniqueName: \"kubernetes.io/projected/da766196-4491-43f8-a7f4-97b6b2ff4f0a-kube-api-access-b9hmw\") pod \"calico-apiserver-8cd67c9c6-zmk4q\" (UID: \"da766196-4491-43f8-a7f4-97b6b2ff4f0a\") " pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" Nov 6 05:55:17.529452 kubelet[2932]: I1106 05:55:17.528555 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/395b50d4-a01a-4208-bbc6-1cef03cb976b-config-volume\") pod \"coredns-674b8bbfcf-26ttf\" (UID: \"395b50d4-a01a-4208-bbc6-1cef03cb976b\") " pod="kube-system/coredns-674b8bbfcf-26ttf" Nov 6 05:55:17.529452 kubelet[2932]: I1106 05:55:17.528595 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ed9c04f-448d-4add-ac51-b62b66fb2d29-config-volume\") pod \"coredns-674b8bbfcf-pbnqx\" (UID: \"7ed9c04f-448d-4add-ac51-b62b66fb2d29\") " pod="kube-system/coredns-674b8bbfcf-pbnqx" Nov 6 05:55:17.529452 kubelet[2932]: I1106 05:55:17.528624 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpf8s\" (UniqueName: \"kubernetes.io/projected/7ed9c04f-448d-4add-ac51-b62b66fb2d29-kube-api-access-dpf8s\") pod \"coredns-674b8bbfcf-pbnqx\" (UID: \"7ed9c04f-448d-4add-ac51-b62b66fb2d29\") " pod="kube-system/coredns-674b8bbfcf-pbnqx" Nov 6 05:55:17.529452 kubelet[2932]: I1106 05:55:17.528668 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pjl7\" (UniqueName: \"kubernetes.io/projected/395b50d4-a01a-4208-bbc6-1cef03cb976b-kube-api-access-7pjl7\") pod \"coredns-674b8bbfcf-26ttf\" (UID: \"395b50d4-a01a-4208-bbc6-1cef03cb976b\") " pod="kube-system/coredns-674b8bbfcf-26ttf" Nov 6 05:55:17.529981 kubelet[2932]: I1106 05:55:17.528700 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/da766196-4491-43f8-a7f4-97b6b2ff4f0a-calico-apiserver-certs\") pod \"calico-apiserver-8cd67c9c6-zmk4q\" (UID: \"da766196-4491-43f8-a7f4-97b6b2ff4f0a\") " pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" Nov 6 05:55:17.540073 systemd[1]: Created slice kubepods-burstable-pod7ed9c04f_448d_4add_ac51_b62b66fb2d29.slice - libcontainer container kubepods-burstable-pod7ed9c04f_448d_4add_ac51_b62b66fb2d29.slice. Nov 6 05:55:17.551502 systemd[1]: Created slice kubepods-besteffort-podda766196_4491_43f8_a7f4_97b6b2ff4f0a.slice - libcontainer container kubepods-besteffort-podda766196_4491_43f8_a7f4_97b6b2ff4f0a.slice. Nov 6 05:55:17.569803 systemd[1]: Created slice kubepods-besteffort-pod5513e9ee_51f3_4098_ad93_0cffcda4f037.slice - libcontainer container kubepods-besteffort-pod5513e9ee_51f3_4098_ad93_0cffcda4f037.slice. Nov 6 05:55:17.584765 systemd[1]: Created slice kubepods-besteffort-podbbb1a0ea_189c_4a8a_9af6_93e27581e04c.slice - libcontainer container kubepods-besteffort-podbbb1a0ea_189c_4a8a_9af6_93e27581e04c.slice. Nov 6 05:55:17.602501 systemd[1]: Created slice kubepods-besteffort-pod6eba05f7_0eaa_45c3_8192_73bb69abd3a6.slice - libcontainer container kubepods-besteffort-pod6eba05f7_0eaa_45c3_8192_73bb69abd3a6.slice. 
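The "Created slice kubepods-..." records in this stretch follow a simple naming pattern: with the systemd cgroup driver, kubelet derives each pod's slice from its QoS class and UID, escaping the dashes in the UID to underscores. A small sketch reproducing the names seen above (illustrative reconstruction of the pattern, not kubelet's actual code):

```go
// Rebuild the per-pod slice names that appear in the "Created slice ..." records.
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the log: QoS class plus the pod UID with
// its dashes replaced by underscores, wrapped as a systemd slice unit name.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// UIDs taken from the records above: coredns-674b8bbfcf-26ttf (burstable)
	// and calico-apiserver-8cd67c9c6-hw8f5 (besteffort).
	fmt.Println(podSliceName("burstable", "395b50d4-a01a-4208-bbc6-1cef03cb976b"))
	fmt.Println(podSliceName("besteffort", "6eba05f7-0eaa-45c3-8192-73bb69abd3a6"))
}
```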
Nov 6 05:55:17.613867 systemd[1]: Created slice kubepods-besteffort-pod9a7a74a3_52cd_4b77_ac72_984211b63b0e.slice - libcontainer container kubepods-besteffort-pod9a7a74a3_52cd_4b77_ac72_984211b63b0e.slice. Nov 6 05:55:17.627748 systemd[1]: Created slice kubepods-besteffort-pod9b7c3e80_078e_47aa_8574_17ccfe24f839.slice - libcontainer container kubepods-besteffort-pod9b7c3e80_078e_47aa_8574_17ccfe24f839.slice. Nov 6 05:55:17.631845 kubelet[2932]: I1106 05:55:17.629472 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a7a74a3-52cd-4b77-ac72-984211b63b0e-config\") pod \"goldmane-666569f655-4vklp\" (UID: \"9a7a74a3-52cd-4b77-ac72-984211b63b0e\") " pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:17.631845 kubelet[2932]: I1106 05:55:17.629554 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a7a74a3-52cd-4b77-ac72-984211b63b0e-goldmane-ca-bundle\") pod \"goldmane-666569f655-4vklp\" (UID: \"9a7a74a3-52cd-4b77-ac72-984211b63b0e\") " pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:17.631845 kubelet[2932]: I1106 05:55:17.629594 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68xk5\" (UniqueName: \"kubernetes.io/projected/9a7a74a3-52cd-4b77-ac72-984211b63b0e-kube-api-access-68xk5\") pod \"goldmane-666569f655-4vklp\" (UID: \"9a7a74a3-52cd-4b77-ac72-984211b63b0e\") " pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:17.631845 kubelet[2932]: I1106 05:55:17.629666 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-backend-key-pair\") pod \"whisker-57d47bffb8-bml8h\" (UID: \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\") " pod="calico-system/whisker-57d47bffb8-bml8h" Nov 6 05:55:17.631845 kubelet[2932]: I1106 05:55:17.629777 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47cr6\" (UniqueName: \"kubernetes.io/projected/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-kube-api-access-47cr6\") pod \"whisker-57d47bffb8-bml8h\" (UID: \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\") " pod="calico-system/whisker-57d47bffb8-bml8h" Nov 6 05:55:17.633325 kubelet[2932]: I1106 05:55:17.629813 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6eba05f7-0eaa-45c3-8192-73bb69abd3a6-calico-apiserver-certs\") pod \"calico-apiserver-8cd67c9c6-hw8f5\" (UID: \"6eba05f7-0eaa-45c3-8192-73bb69abd3a6\") " pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" Nov 6 05:55:17.633325 kubelet[2932]: I1106 05:55:17.629855 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltmb\" (UniqueName: \"kubernetes.io/projected/6eba05f7-0eaa-45c3-8192-73bb69abd3a6-kube-api-access-xltmb\") pod \"calico-apiserver-8cd67c9c6-hw8f5\" (UID: \"6eba05f7-0eaa-45c3-8192-73bb69abd3a6\") " pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" Nov 6 05:55:17.633325 kubelet[2932]: I1106 05:55:17.629898 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4wc\" (UniqueName: 
\"kubernetes.io/projected/5513e9ee-51f3-4098-ad93-0cffcda4f037-kube-api-access-9w4wc\") pod \"calico-kube-controllers-859c8f8bcd-gw95r\" (UID: \"5513e9ee-51f3-4098-ad93-0cffcda4f037\") " pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" Nov 6 05:55:17.633325 kubelet[2932]: I1106 05:55:17.629936 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-ca-bundle\") pod \"whisker-57d47bffb8-bml8h\" (UID: \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\") " pod="calico-system/whisker-57d47bffb8-bml8h" Nov 6 05:55:17.633325 kubelet[2932]: I1106 05:55:17.630002 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9a7a74a3-52cd-4b77-ac72-984211b63b0e-goldmane-key-pair\") pod \"goldmane-666569f655-4vklp\" (UID: \"9a7a74a3-52cd-4b77-ac72-984211b63b0e\") " pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:17.633547 kubelet[2932]: I1106 05:55:17.630036 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5513e9ee-51f3-4098-ad93-0cffcda4f037-tigera-ca-bundle\") pod \"calico-kube-controllers-859c8f8bcd-gw95r\" (UID: \"5513e9ee-51f3-4098-ad93-0cffcda4f037\") " pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" Nov 6 05:55:17.642827 containerd[1638]: time="2025-11-06T05:55:17.642690944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-298gz,Uid:9b7c3e80-078e-47aa-8574-17ccfe24f839,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:17.818598 containerd[1638]: time="2025-11-06T05:55:17.815960827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 6 05:55:17.833968 containerd[1638]: time="2025-11-06T05:55:17.833415748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26ttf,Uid:395b50d4-a01a-4208-bbc6-1cef03cb976b,Namespace:kube-system,Attempt:0,}" Nov 6 05:55:17.857267 containerd[1638]: time="2025-11-06T05:55:17.857186855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbnqx,Uid:7ed9c04f-448d-4add-ac51-b62b66fb2d29,Namespace:kube-system,Attempt:0,}" Nov 6 05:55:17.868652 containerd[1638]: time="2025-11-06T05:55:17.868597743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-zmk4q,Uid:da766196-4491-43f8-a7f4-97b6b2ff4f0a,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:55:17.886516 containerd[1638]: time="2025-11-06T05:55:17.886301491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859c8f8bcd-gw95r,Uid:5513e9ee-51f3-4098-ad93-0cffcda4f037,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:17.898036 containerd[1638]: time="2025-11-06T05:55:17.897692588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d47bffb8-bml8h,Uid:bbb1a0ea-189c-4a8a-9af6-93e27581e04c,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:17.922791 containerd[1638]: time="2025-11-06T05:55:17.922363541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-hw8f5,Uid:6eba05f7-0eaa-45c3-8192-73bb69abd3a6,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:55:17.923593 containerd[1638]: time="2025-11-06T05:55:17.923559891Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-4vklp,Uid:9a7a74a3-52cd-4b77-ac72-984211b63b0e,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:18.244266 containerd[1638]: time="2025-11-06T05:55:18.244191957Z" level=error msg="Failed to destroy network for sandbox \"0c211ac3091e7a775a1e1b60c7249f725e1d6c0ec3d469ddeb939afc349d4b05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.265753 containerd[1638]: time="2025-11-06T05:55:18.265679188Z" level=error msg="Failed to destroy network for sandbox \"6c79ca2fbdabfece558cdcd694939aaaf42a258fd39b24042fb3770fd7317cd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.271285 containerd[1638]: time="2025-11-06T05:55:18.271180948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-298gz,Uid:9b7c3e80-078e-47aa-8574-17ccfe24f839,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c211ac3091e7a775a1e1b60c7249f725e1d6c0ec3d469ddeb939afc349d4b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.281406 kubelet[2932]: E1106 05:55:18.281193 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c211ac3091e7a775a1e1b60c7249f725e1d6c0ec3d469ddeb939afc349d4b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.281406 kubelet[2932]: E1106 05:55:18.281332 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c211ac3091e7a775a1e1b60c7249f725e1d6c0ec3d469ddeb939afc349d4b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:18.281406 kubelet[2932]: E1106 05:55:18.281387 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c211ac3091e7a775a1e1b60c7249f725e1d6c0ec3d469ddeb939afc349d4b05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:18.281799 containerd[1638]: time="2025-11-06T05:55:18.281217729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbnqx,Uid:7ed9c04f-448d-4add-ac51-b62b66fb2d29,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c79ca2fbdabfece558cdcd694939aaaf42a258fd39b24042fb3770fd7317cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.282341 kubelet[2932]: E1106 05:55:18.282257 2932 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c79ca2fbdabfece558cdcd694939aaaf42a258fd39b24042fb3770fd7317cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.283803 kubelet[2932]: E1106 05:55:18.282342 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c79ca2fbdabfece558cdcd694939aaaf42a258fd39b24042fb3770fd7317cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbnqx" Nov 6 05:55:18.283803 kubelet[2932]: E1106 05:55:18.282373 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c79ca2fbdabfece558cdcd694939aaaf42a258fd39b24042fb3770fd7317cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbnqx" Nov 6 05:55:18.292362 kubelet[2932]: E1106 05:55:18.289927 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c211ac3091e7a775a1e1b60c7249f725e1d6c0ec3d469ddeb939afc349d4b05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:18.292923 kubelet[2932]: E1106 05:55:18.292119 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pbnqx_kube-system(7ed9c04f-448d-4add-ac51-b62b66fb2d29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pbnqx_kube-system(7ed9c04f-448d-4add-ac51-b62b66fb2d29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c79ca2fbdabfece558cdcd694939aaaf42a258fd39b24042fb3770fd7317cd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pbnqx" podUID="7ed9c04f-448d-4add-ac51-b62b66fb2d29" Nov 6 05:55:18.363373 containerd[1638]: time="2025-11-06T05:55:18.363299011Z" level=error msg="Failed to destroy network for sandbox \"692d8916c3d9d471875042c2fa0760385a5732c3d2d1691fa06dd527bae2d22f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.370115 systemd[1]: run-netns-cni\x2d35edc12e\x2df13d\x2d3bb4\x2d358f\x2d70db2c30f737.mount: Deactivated successfully. 
Nov 6 05:55:18.379720 containerd[1638]: time="2025-11-06T05:55:18.379635552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859c8f8bcd-gw95r,Uid:5513e9ee-51f3-4098-ad93-0cffcda4f037,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"692d8916c3d9d471875042c2fa0760385a5732c3d2d1691fa06dd527bae2d22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.384600 systemd[1]: run-netns-cni\x2daf475443\x2d5b77\x2db465\x2da42a\x2d8a9760960866.mount: Deactivated successfully. Nov 6 05:55:18.388752 kubelet[2932]: E1106 05:55:18.386501 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692d8916c3d9d471875042c2fa0760385a5732c3d2d1691fa06dd527bae2d22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.388752 kubelet[2932]: E1106 05:55:18.386608 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692d8916c3d9d471875042c2fa0760385a5732c3d2d1691fa06dd527bae2d22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" Nov 6 05:55:18.391385 containerd[1638]: time="2025-11-06T05:55:18.389217270Z" level=error msg="Failed to destroy network for sandbox \"26d418ae08ab4ccdb6d53819b773d1925529e6f23bbdb216ba1498ce29a6191f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.391385 containerd[1638]: time="2025-11-06T05:55:18.389680073Z" level=error msg="Failed to destroy network for sandbox \"af8c737c5444575c0ecf8dd51049a0c86bd048eb3044dc1c692ef853f5a76daa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.401177 kubelet[2932]: E1106 05:55:18.393233 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"692d8916c3d9d471875042c2fa0760385a5732c3d2d1691fa06dd527bae2d22f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" Nov 6 05:55:18.401177 kubelet[2932]: E1106 05:55:18.400250 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-859c8f8bcd-gw95r_calico-system(5513e9ee-51f3-4098-ad93-0cffcda4f037)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-859c8f8bcd-gw95r_calico-system(5513e9ee-51f3-4098-ad93-0cffcda4f037)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"692d8916c3d9d471875042c2fa0760385a5732c3d2d1691fa06dd527bae2d22f\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:55:18.398124 systemd[1]: run-netns-cni\x2d437b98ef\x2dca45\x2d7783\x2d6b15\x2dfc604c435e3c.mount: Deactivated successfully. Nov 6 05:55:18.398310 systemd[1]: run-netns-cni\x2dfa775f70\x2d8b4a\x2db62c\x2d4002\x2d7f648790597d.mount: Deactivated successfully. Nov 6 05:55:18.403195 containerd[1638]: time="2025-11-06T05:55:18.402269136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-zmk4q,Uid:da766196-4491-43f8-a7f4-97b6b2ff4f0a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d418ae08ab4ccdb6d53819b773d1925529e6f23bbdb216ba1498ce29a6191f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.403308 kubelet[2932]: E1106 05:55:18.402815 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d418ae08ab4ccdb6d53819b773d1925529e6f23bbdb216ba1498ce29a6191f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.403308 kubelet[2932]: E1106 05:55:18.402901 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d418ae08ab4ccdb6d53819b773d1925529e6f23bbdb216ba1498ce29a6191f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" Nov 6 05:55:18.403308 kubelet[2932]: E1106 05:55:18.403085 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26d418ae08ab4ccdb6d53819b773d1925529e6f23bbdb216ba1498ce29a6191f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" Nov 6 05:55:18.403951 kubelet[2932]: E1106 05:55:18.403345 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cd67c9c6-zmk4q_calico-apiserver(da766196-4491-43f8-a7f4-97b6b2ff4f0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cd67c9c6-zmk4q_calico-apiserver(da766196-4491-43f8-a7f4-97b6b2ff4f0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26d418ae08ab4ccdb6d53819b773d1925529e6f23bbdb216ba1498ce29a6191f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:55:18.415851 containerd[1638]: time="2025-11-06T05:55:18.410878740Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-26ttf,Uid:395b50d4-a01a-4208-bbc6-1cef03cb976b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8c737c5444575c0ecf8dd51049a0c86bd048eb3044dc1c692ef853f5a76daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.415851 containerd[1638]: time="2025-11-06T05:55:18.410889299Z" level=error msg="Failed to destroy network for sandbox \"45e55939a7eb706462bd1d8bc7ea2112b06735611ea0f80b984b448352e8fd93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.418932 containerd[1638]: time="2025-11-06T05:55:18.418618596Z" level=error msg="Failed to destroy network for sandbox \"698b90cb3c6e261b4e76bfad9d7a69733c49638a5473ab857ad8731fe8bb2ac2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.420039 kubelet[2932]: E1106 05:55:18.418789 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8c737c5444575c0ecf8dd51049a0c86bd048eb3044dc1c692ef853f5a76daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.420039 kubelet[2932]: E1106 05:55:18.419096 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8c737c5444575c0ecf8dd51049a0c86bd048eb3044dc1c692ef853f5a76daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-26ttf" Nov 6 05:55:18.420039 kubelet[2932]: E1106 05:55:18.419169 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8c737c5444575c0ecf8dd51049a0c86bd048eb3044dc1c692ef853f5a76daa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-26ttf" Nov 6 05:55:18.419725 systemd[1]: run-netns-cni\x2d9449e271\x2db6af\x2d0bae\x2d16fd\x2df44a7475a3c5.mount: Deactivated successfully. 
Nov 6 05:55:18.420397 kubelet[2932]: E1106 05:55:18.419312 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-26ttf_kube-system(395b50d4-a01a-4208-bbc6-1cef03cb976b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-26ttf_kube-system(395b50d4-a01a-4208-bbc6-1cef03cb976b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af8c737c5444575c0ecf8dd51049a0c86bd048eb3044dc1c692ef853f5a76daa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-26ttf" podUID="395b50d4-a01a-4208-bbc6-1cef03cb976b" Nov 6 05:55:18.429175 containerd[1638]: time="2025-11-06T05:55:18.427088288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d47bffb8-bml8h,Uid:bbb1a0ea-189c-4a8a-9af6-93e27581e04c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e55939a7eb706462bd1d8bc7ea2112b06735611ea0f80b984b448352e8fd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.429708 kubelet[2932]: E1106 05:55:18.427562 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e55939a7eb706462bd1d8bc7ea2112b06735611ea0f80b984b448352e8fd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.429708 kubelet[2932]: E1106 05:55:18.427645 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e55939a7eb706462bd1d8bc7ea2112b06735611ea0f80b984b448352e8fd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d47bffb8-bml8h" Nov 6 05:55:18.429708 kubelet[2932]: E1106 05:55:18.427703 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45e55939a7eb706462bd1d8bc7ea2112b06735611ea0f80b984b448352e8fd93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d47bffb8-bml8h" Nov 6 05:55:18.429865 kubelet[2932]: E1106 05:55:18.427790 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d47bffb8-bml8h_calico-system(bbb1a0ea-189c-4a8a-9af6-93e27581e04c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d47bffb8-bml8h_calico-system(bbb1a0ea-189c-4a8a-9af6-93e27581e04c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45e55939a7eb706462bd1d8bc7ea2112b06735611ea0f80b984b448352e8fd93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d47bffb8-bml8h" 
podUID="bbb1a0ea-189c-4a8a-9af6-93e27581e04c" Nov 6 05:55:18.432890 systemd[1]: run-netns-cni\x2dacb0105a\x2d8b07\x2d7fa9\x2d98e1\x2dbe16e213dbca.mount: Deactivated successfully. Nov 6 05:55:18.437286 containerd[1638]: time="2025-11-06T05:55:18.437199739Z" level=error msg="Failed to destroy network for sandbox \"2330c6836956e8601b9b4aecfe274bb2bc25871c5ae4de1cfcb25596eaaf23f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.437966 containerd[1638]: time="2025-11-06T05:55:18.437333766Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vklp,Uid:9a7a74a3-52cd-4b77-ac72-984211b63b0e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"698b90cb3c6e261b4e76bfad9d7a69733c49638a5473ab857ad8731fe8bb2ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.439181 kubelet[2932]: E1106 05:55:18.438601 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"698b90cb3c6e261b4e76bfad9d7a69733c49638a5473ab857ad8731fe8bb2ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.439181 kubelet[2932]: E1106 05:55:18.438702 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"698b90cb3c6e261b4e76bfad9d7a69733c49638a5473ab857ad8731fe8bb2ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:18.439181 kubelet[2932]: E1106 05:55:18.438740 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"698b90cb3c6e261b4e76bfad9d7a69733c49638a5473ab857ad8731fe8bb2ac2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:18.440455 kubelet[2932]: E1106 05:55:18.438823 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4vklp_calico-system(9a7a74a3-52cd-4b77-ac72-984211b63b0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4vklp_calico-system(9a7a74a3-52cd-4b77-ac72-984211b63b0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"698b90cb3c6e261b4e76bfad9d7a69733c49638a5473ab857ad8731fe8bb2ac2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:55:18.440987 containerd[1638]: time="2025-11-06T05:55:18.440937410Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-hw8f5,Uid:6eba05f7-0eaa-45c3-8192-73bb69abd3a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2330c6836956e8601b9b4aecfe274bb2bc25871c5ae4de1cfcb25596eaaf23f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.441668 kubelet[2932]: E1106 05:55:18.441602 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2330c6836956e8601b9b4aecfe274bb2bc25871c5ae4de1cfcb25596eaaf23f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:18.441816 kubelet[2932]: E1106 05:55:18.441789 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2330c6836956e8601b9b4aecfe274bb2bc25871c5ae4de1cfcb25596eaaf23f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" Nov 6 05:55:18.441966 kubelet[2932]: E1106 05:55:18.441937 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2330c6836956e8601b9b4aecfe274bb2bc25871c5ae4de1cfcb25596eaaf23f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" Nov 6 05:55:18.442214 kubelet[2932]: E1106 05:55:18.442156 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2330c6836956e8601b9b4aecfe274bb2bc25871c5ae4de1cfcb25596eaaf23f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:55:19.338130 systemd[1]: run-netns-cni\x2d5fc5b4d9\x2d80b1\x2da2b7\x2d97e9\x2dfacf63f351bd.mount: Deactivated successfully. 
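Every RunPodSandbox attempt in the 05:55:18-05:55:19 window above fails with the same underlying error: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file that the calico/node container writes only after it is running with /var/lib/calico/ mounted from the host. Until that container comes up, every CNI add and delete for the pending sandboxes keeps failing and kubelet keeps retrying the pods. What follows is a minimal, hypothetical Go sketch of that precondition check, not Calico source code; the only detail taken from the log is the path in the error messages.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path reported in the log; calico/node creates it once it is running
	// with /var/lib/calico mounted from the host.
	const nodenameFile = "/var/lib/calico/nodename"

	if _, err := os.Stat(nodenameFile); err != nil {
		if os.IsNotExist(err) {
			// Mirrors the hint appended to every failure above.
			fmt.Printf("%s missing: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
			return
		}
		fmt.Printf("stat %s failed: %v\n", nodenameFile, err)
		return
	}
	fmt.Println("nodename file present; CNI add/delete can proceed")
}

Later in this log the condition clears: the calico-node container starts at 05:55:33, and the first successful CNI setup (IPAM assignment and the caliaa23cc8f3a2 interface coming up) appears at 05:55:35.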
Nov 6 05:55:24.425985 kubelet[2932]: I1106 05:55:24.425928 2932 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 6 05:55:29.520174 containerd[1638]: time="2025-11-06T05:55:29.520042600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-zmk4q,Uid:da766196-4491-43f8-a7f4-97b6b2ff4f0a,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:55:29.523621 containerd[1638]: time="2025-11-06T05:55:29.523520271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d47bffb8-bml8h,Uid:bbb1a0ea-189c-4a8a-9af6-93e27581e04c,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:29.742070 containerd[1638]: time="2025-11-06T05:55:29.741659094Z" level=error msg="Failed to destroy network for sandbox \"093517a7795672f35610e3b265bb67f098fb01bd967600f39d417fed168892f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:29.746886 systemd[1]: run-netns-cni\x2d97fb5b55\x2d0091\x2d1b03\x2dbcc8\x2d0c7a7b12a70b.mount: Deactivated successfully. Nov 6 05:55:29.751057 containerd[1638]: time="2025-11-06T05:55:29.750795658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57d47bffb8-bml8h,Uid:bbb1a0ea-189c-4a8a-9af6-93e27581e04c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"093517a7795672f35610e3b265bb67f098fb01bd967600f39d417fed168892f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:29.752291 kubelet[2932]: E1106 05:55:29.751435 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"093517a7795672f35610e3b265bb67f098fb01bd967600f39d417fed168892f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:29.752291 kubelet[2932]: E1106 05:55:29.751533 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"093517a7795672f35610e3b265bb67f098fb01bd967600f39d417fed168892f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d47bffb8-bml8h" Nov 6 05:55:29.752291 kubelet[2932]: E1106 05:55:29.751575 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"093517a7795672f35610e3b265bb67f098fb01bd967600f39d417fed168892f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57d47bffb8-bml8h" Nov 6 05:55:29.752855 kubelet[2932]: E1106 05:55:29.751676 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57d47bffb8-bml8h_calico-system(bbb1a0ea-189c-4a8a-9af6-93e27581e04c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57d47bffb8-bml8h_calico-system(bbb1a0ea-189c-4a8a-9af6-93e27581e04c)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"093517a7795672f35610e3b265bb67f098fb01bd967600f39d417fed168892f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57d47bffb8-bml8h" podUID="bbb1a0ea-189c-4a8a-9af6-93e27581e04c" Nov 6 05:55:29.764158 containerd[1638]: time="2025-11-06T05:55:29.764041796Z" level=error msg="Failed to destroy network for sandbox \"799ebaf9ad52722bebaba9b45279fb32a07b863e4b33728a6faa834010049054\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:29.769700 systemd[1]: run-netns-cni\x2d1657a417\x2d106d\x2d41ab\x2d1998\x2de2785d6a08ec.mount: Deactivated successfully. Nov 6 05:55:29.772340 containerd[1638]: time="2025-11-06T05:55:29.772197957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-zmk4q,Uid:da766196-4491-43f8-a7f4-97b6b2ff4f0a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"799ebaf9ad52722bebaba9b45279fb32a07b863e4b33728a6faa834010049054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:29.773216 kubelet[2932]: E1106 05:55:29.772520 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799ebaf9ad52722bebaba9b45279fb32a07b863e4b33728a6faa834010049054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:29.773216 kubelet[2932]: E1106 05:55:29.772615 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799ebaf9ad52722bebaba9b45279fb32a07b863e4b33728a6faa834010049054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" Nov 6 05:55:29.773216 kubelet[2932]: E1106 05:55:29.772652 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799ebaf9ad52722bebaba9b45279fb32a07b863e4b33728a6faa834010049054\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" Nov 6 05:55:29.773383 kubelet[2932]: E1106 05:55:29.772739 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cd67c9c6-zmk4q_calico-apiserver(da766196-4491-43f8-a7f4-97b6b2ff4f0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cd67c9c6-zmk4q_calico-apiserver(da766196-4491-43f8-a7f4-97b6b2ff4f0a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"799ebaf9ad52722bebaba9b45279fb32a07b863e4b33728a6faa834010049054\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:55:30.527550 containerd[1638]: time="2025-11-06T05:55:30.527486007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26ttf,Uid:395b50d4-a01a-4208-bbc6-1cef03cb976b,Namespace:kube-system,Attempt:0,}" Nov 6 05:55:30.531335 containerd[1638]: time="2025-11-06T05:55:30.531226539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vklp,Uid:9a7a74a3-52cd-4b77-ac72-984211b63b0e,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:30.742758 containerd[1638]: time="2025-11-06T05:55:30.742682261Z" level=error msg="Failed to destroy network for sandbox \"51435ee3630e962a03fe8e012e5b31776af8f500dd826fba9b95121dc5207b2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:30.747254 containerd[1638]: time="2025-11-06T05:55:30.747049564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vklp,Uid:9a7a74a3-52cd-4b77-ac72-984211b63b0e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51435ee3630e962a03fe8e012e5b31776af8f500dd826fba9b95121dc5207b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:30.749365 kubelet[2932]: E1106 05:55:30.748504 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51435ee3630e962a03fe8e012e5b31776af8f500dd826fba9b95121dc5207b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:30.749365 kubelet[2932]: E1106 05:55:30.748595 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51435ee3630e962a03fe8e012e5b31776af8f500dd826fba9b95121dc5207b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:30.749365 kubelet[2932]: E1106 05:55:30.748649 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51435ee3630e962a03fe8e012e5b31776af8f500dd826fba9b95121dc5207b2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-4vklp" Nov 6 05:55:30.750103 kubelet[2932]: E1106 05:55:30.748751 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-4vklp_calico-system(9a7a74a3-52cd-4b77-ac72-984211b63b0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-4vklp_calico-system(9a7a74a3-52cd-4b77-ac72-984211b63b0e)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"51435ee3630e962a03fe8e012e5b31776af8f500dd826fba9b95121dc5207b2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:55:30.752050 systemd[1]: run-netns-cni\x2d76b675ad\x2d39e6\x2d9533\x2de099\x2d5538a5819777.mount: Deactivated successfully. Nov 6 05:55:30.766157 containerd[1638]: time="2025-11-06T05:55:30.765967260Z" level=error msg="Failed to destroy network for sandbox \"846f4174706fb79dc8b866338397c32bd87faae45d3d3110429dab6c34d696ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:30.769348 containerd[1638]: time="2025-11-06T05:55:30.768815226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26ttf,Uid:395b50d4-a01a-4208-bbc6-1cef03cb976b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"846f4174706fb79dc8b866338397c32bd87faae45d3d3110429dab6c34d696ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:30.769926 kubelet[2932]: E1106 05:55:30.769848 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846f4174706fb79dc8b866338397c32bd87faae45d3d3110429dab6c34d696ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:30.772265 kubelet[2932]: E1106 05:55:30.769963 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846f4174706fb79dc8b866338397c32bd87faae45d3d3110429dab6c34d696ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-26ttf" Nov 6 05:55:30.772455 kubelet[2932]: E1106 05:55:30.772266 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846f4174706fb79dc8b866338397c32bd87faae45d3d3110429dab6c34d696ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-26ttf" Nov 6 05:55:30.773312 kubelet[2932]: E1106 05:55:30.772688 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-26ttf_kube-system(395b50d4-a01a-4208-bbc6-1cef03cb976b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-26ttf_kube-system(395b50d4-a01a-4208-bbc6-1cef03cb976b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"846f4174706fb79dc8b866338397c32bd87faae45d3d3110429dab6c34d696ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-26ttf" podUID="395b50d4-a01a-4208-bbc6-1cef03cb976b" Nov 6 05:55:30.773644 systemd[1]: run-netns-cni\x2d09883044\x2d4499\x2db87e\x2d9313\x2dad7ad7597d28.mount: Deactivated successfully. Nov 6 05:55:31.530369 containerd[1638]: time="2025-11-06T05:55:31.530300962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbnqx,Uid:7ed9c04f-448d-4add-ac51-b62b66fb2d29,Namespace:kube-system,Attempt:0,}" Nov 6 05:55:31.767645 containerd[1638]: time="2025-11-06T05:55:31.767423678Z" level=error msg="Failed to destroy network for sandbox \"f9a989efebff520b9d180e47dc104de50e725ee6b58a97f1aac4c9554934ede5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:31.777913 systemd[1]: run-netns-cni\x2d9b72c311\x2d201c\x2dc068\x2d3995\x2dca9aa3275ab2.mount: Deactivated successfully. Nov 6 05:55:31.789674 containerd[1638]: time="2025-11-06T05:55:31.789337312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbnqx,Uid:7ed9c04f-448d-4add-ac51-b62b66fb2d29,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a989efebff520b9d180e47dc104de50e725ee6b58a97f1aac4c9554934ede5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:31.793107 kubelet[2932]: E1106 05:55:31.792308 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a989efebff520b9d180e47dc104de50e725ee6b58a97f1aac4c9554934ede5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:31.793107 kubelet[2932]: E1106 05:55:31.792401 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a989efebff520b9d180e47dc104de50e725ee6b58a97f1aac4c9554934ede5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbnqx" Nov 6 05:55:31.793107 kubelet[2932]: E1106 05:55:31.792446 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9a989efebff520b9d180e47dc104de50e725ee6b58a97f1aac4c9554934ede5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pbnqx" Nov 6 05:55:31.793678 kubelet[2932]: E1106 05:55:31.792545 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pbnqx_kube-system(7ed9c04f-448d-4add-ac51-b62b66fb2d29)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pbnqx_kube-system(7ed9c04f-448d-4add-ac51-b62b66fb2d29)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"f9a989efebff520b9d180e47dc104de50e725ee6b58a97f1aac4c9554934ede5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pbnqx" podUID="7ed9c04f-448d-4add-ac51-b62b66fb2d29" Nov 6 05:55:32.523897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289195067.mount: Deactivated successfully. Nov 6 05:55:32.555702 containerd[1638]: time="2025-11-06T05:55:32.555454398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859c8f8bcd-gw95r,Uid:5513e9ee-51f3-4098-ad93-0cffcda4f037,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:32.627165 containerd[1638]: time="2025-11-06T05:55:32.626630371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 6 05:55:32.628930 containerd[1638]: time="2025-11-06T05:55:32.626807482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:32.658242 containerd[1638]: time="2025-11-06T05:55:32.658104495Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:32.660116 containerd[1638]: time="2025-11-06T05:55:32.659109169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 14.840886782s" Nov 6 05:55:32.660367 containerd[1638]: time="2025-11-06T05:55:32.660145374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 6 05:55:32.660367 containerd[1638]: time="2025-11-06T05:55:32.659913029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 05:55:32.749423 containerd[1638]: time="2025-11-06T05:55:32.749303142Z" level=info msg="CreateContainer within sandbox \"98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 6 05:55:32.754459 containerd[1638]: time="2025-11-06T05:55:32.754402891Z" level=error msg="Failed to destroy network for sandbox \"1a1e21fcdd9ac3151d86c8c25708541f787ae5f99f35c39efbc00cfb644dbea2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:32.757549 containerd[1638]: time="2025-11-06T05:55:32.757066425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859c8f8bcd-gw95r,Uid:5513e9ee-51f3-4098-ad93-0cffcda4f037,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1e21fcdd9ac3151d86c8c25708541f787ae5f99f35c39efbc00cfb644dbea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Nov 6 05:55:32.759213 kubelet[2932]: E1106 05:55:32.757396 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1e21fcdd9ac3151d86c8c25708541f787ae5f99f35c39efbc00cfb644dbea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:32.759213 kubelet[2932]: E1106 05:55:32.757485 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1e21fcdd9ac3151d86c8c25708541f787ae5f99f35c39efbc00cfb644dbea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" Nov 6 05:55:32.759213 kubelet[2932]: E1106 05:55:32.757520 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1e21fcdd9ac3151d86c8c25708541f787ae5f99f35c39efbc00cfb644dbea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" Nov 6 05:55:32.759480 kubelet[2932]: E1106 05:55:32.757608 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-859c8f8bcd-gw95r_calico-system(5513e9ee-51f3-4098-ad93-0cffcda4f037)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-859c8f8bcd-gw95r_calico-system(5513e9ee-51f3-4098-ad93-0cffcda4f037)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a1e21fcdd9ac3151d86c8c25708541f787ae5f99f35c39efbc00cfb644dbea2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:55:32.772042 systemd[1]: run-netns-cni\x2d6fd6f99a\x2dcd86\x2d192e\x2d4e66\x2dd1cb2eb78aa8.mount: Deactivated successfully. Nov 6 05:55:32.796747 containerd[1638]: time="2025-11-06T05:55:32.796357651Z" level=info msg="Container cee99b557ac5ca3321956dde8b6a551dcb13bfb1dd9ce1c047edcc45efdd97fa: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:55:32.796873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1097574591.mount: Deactivated successfully. 
Nov 6 05:55:32.816634 containerd[1638]: time="2025-11-06T05:55:32.816522148Z" level=info msg="CreateContainer within sandbox \"98eb0d010d275981f5d7f147d45c56be34c670068fcc9cfcea07b62331886d4a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cee99b557ac5ca3321956dde8b6a551dcb13bfb1dd9ce1c047edcc45efdd97fa\"" Nov 6 05:55:32.818772 containerd[1638]: time="2025-11-06T05:55:32.818297105Z" level=info msg="StartContainer for \"cee99b557ac5ca3321956dde8b6a551dcb13bfb1dd9ce1c047edcc45efdd97fa\"" Nov 6 05:55:32.837290 containerd[1638]: time="2025-11-06T05:55:32.837059279Z" level=info msg="connecting to shim cee99b557ac5ca3321956dde8b6a551dcb13bfb1dd9ce1c047edcc45efdd97fa" address="unix:///run/containerd/s/87e0ae288f371664ea5e37740f6f5c4f43757f2ef89565f7c55e54487c71a2ef" protocol=ttrpc version=3 Nov 6 05:55:33.003413 systemd[1]: Started cri-containerd-cee99b557ac5ca3321956dde8b6a551dcb13bfb1dd9ce1c047edcc45efdd97fa.scope - libcontainer container cee99b557ac5ca3321956dde8b6a551dcb13bfb1dd9ce1c047edcc45efdd97fa. Nov 6 05:55:33.109249 containerd[1638]: time="2025-11-06T05:55:33.109192262Z" level=info msg="StartContainer for \"cee99b557ac5ca3321956dde8b6a551dcb13bfb1dd9ce1c047edcc45efdd97fa\" returns successfully" Nov 6 05:55:33.441617 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 6 05:55:33.461748 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 6 05:55:33.526166 containerd[1638]: time="2025-11-06T05:55:33.525695046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-298gz,Uid:9b7c3e80-078e-47aa-8574-17ccfe24f839,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:33.528761 containerd[1638]: time="2025-11-06T05:55:33.528725366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-hw8f5,Uid:6eba05f7-0eaa-45c3-8192-73bb69abd3a6,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:55:33.752513 containerd[1638]: time="2025-11-06T05:55:33.752378253Z" level=error msg="Failed to destroy network for sandbox \"8d07370d63c48b2abd601f8b223084cfac25e25ea41ae052ed98c8f9a66cb205\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:33.758856 containerd[1638]: time="2025-11-06T05:55:33.758715514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-298gz,Uid:9b7c3e80-078e-47aa-8574-17ccfe24f839,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d07370d63c48b2abd601f8b223084cfac25e25ea41ae052ed98c8f9a66cb205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:33.764217 kubelet[2932]: E1106 05:55:33.759870 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d07370d63c48b2abd601f8b223084cfac25e25ea41ae052ed98c8f9a66cb205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:33.773350 kubelet[2932]: E1106 05:55:33.763972 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8d07370d63c48b2abd601f8b223084cfac25e25ea41ae052ed98c8f9a66cb205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:33.775681 kubelet[2932]: E1106 05:55:33.775192 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d07370d63c48b2abd601f8b223084cfac25e25ea41ae052ed98c8f9a66cb205\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-298gz" Nov 6 05:55:33.775681 kubelet[2932]: E1106 05:55:33.775386 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d07370d63c48b2abd601f8b223084cfac25e25ea41ae052ed98c8f9a66cb205\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:33.776615 systemd[1]: run-netns-cni\x2d4169ac85\x2d052c\x2d9089\x2d524c\x2d4b8c88e30800.mount: Deactivated successfully. Nov 6 05:55:34.005192 kubelet[2932]: I1106 05:55:34.005036 2932 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47cr6\" (UniqueName: \"kubernetes.io/projected/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-kube-api-access-47cr6\") pod \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\" (UID: \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\") " Nov 6 05:55:34.005192 kubelet[2932]: I1106 05:55:34.005105 2932 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-ca-bundle\") pod \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\" (UID: \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\") " Nov 6 05:55:34.006011 kubelet[2932]: I1106 05:55:34.005552 2932 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-backend-key-pair\") pod \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\" (UID: \"bbb1a0ea-189c-4a8a-9af6-93e27581e04c\") " Nov 6 05:55:34.035307 kubelet[2932]: I1106 05:55:34.024388 2932 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bbb1a0ea-189c-4a8a-9af6-93e27581e04c" (UID: "bbb1a0ea-189c-4a8a-9af6-93e27581e04c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 05:55:34.043225 systemd[1]: var-lib-kubelet-pods-bbb1a0ea\x2d189c\x2d4a8a\x2d9af6\x2d93e27581e04c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d47cr6.mount: Deactivated successfully. 
Nov 6 05:55:34.049111 systemd[1]: var-lib-kubelet-pods-bbb1a0ea\x2d189c\x2d4a8a\x2d9af6\x2d93e27581e04c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 6 05:55:34.058381 kubelet[2932]: I1106 05:55:34.057104 2932 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-kube-api-access-47cr6" (OuterVolumeSpecName: "kube-api-access-47cr6") pod "bbb1a0ea-189c-4a8a-9af6-93e27581e04c" (UID: "bbb1a0ea-189c-4a8a-9af6-93e27581e04c"). InnerVolumeSpecName "kube-api-access-47cr6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 05:55:34.058655 kubelet[2932]: I1106 05:55:34.058610 2932 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "bbb1a0ea-189c-4a8a-9af6-93e27581e04c" (UID: "bbb1a0ea-189c-4a8a-9af6-93e27581e04c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 05:55:34.074528 kubelet[2932]: I1106 05:55:34.067289 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nbrr9" podStartSLOduration=3.011852694 podStartE2EDuration="33.067241829s" podCreationTimestamp="2025-11-06 05:55:01 +0000 UTC" firstStartedPulling="2025-11-06 05:55:02.610416906 +0000 UTC m=+26.357397718" lastFinishedPulling="2025-11-06 05:55:32.665806033 +0000 UTC m=+56.412786853" observedRunningTime="2025-11-06 05:55:34.059486097 +0000 UTC m=+57.806466922" watchObservedRunningTime="2025-11-06 05:55:34.067241829 +0000 UTC m=+57.814222649" Nov 6 05:55:34.111478 kubelet[2932]: I1106 05:55:34.110392 2932 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47cr6\" (UniqueName: \"kubernetes.io/projected/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-kube-api-access-47cr6\") on node \"srv-dhf6q.gb1.brightbox.com\" DevicePath \"\"" Nov 6 05:55:34.111478 kubelet[2932]: I1106 05:55:34.111387 2932 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-ca-bundle\") on node \"srv-dhf6q.gb1.brightbox.com\" DevicePath \"\"" Nov 6 05:55:34.111478 kubelet[2932]: I1106 05:55:34.111430 2932 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bbb1a0ea-189c-4a8a-9af6-93e27581e04c-whisker-backend-key-pair\") on node \"srv-dhf6q.gb1.brightbox.com\" DevicePath \"\"" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:33.915 [INFO][4149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:33.917 [INFO][4149] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" iface="eth0" netns="/var/run/netns/cni-7e3a6660-43c3-384f-ac09-0f05b069491b" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:33.920 [INFO][4149] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" iface="eth0" netns="/var/run/netns/cni-7e3a6660-43c3-384f-ac09-0f05b069491b" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:33.922 [INFO][4149] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" iface="eth0" netns="/var/run/netns/cni-7e3a6660-43c3-384f-ac09-0f05b069491b" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:33.922 [INFO][4149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:33.924 [INFO][4149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:34.271 [INFO][4160] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" HandleID="k8s-pod-network.05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:34.275 [INFO][4160] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:34.312603 containerd[1638]: 2025-11-06 05:55:34.276 [INFO][4160] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:55:34.313760 containerd[1638]: 2025-11-06 05:55:34.299 [WARNING][4160] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" HandleID="k8s-pod-network.05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:34.313760 containerd[1638]: 2025-11-06 05:55:34.299 [INFO][4160] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" HandleID="k8s-pod-network.05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:34.313760 containerd[1638]: 2025-11-06 05:55:34.303 [INFO][4160] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 6 05:55:34.313760 containerd[1638]: 2025-11-06 05:55:34.307 [INFO][4149] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b" Nov 6 05:55:34.318815 systemd[1]: run-netns-cni\x2d7e3a6660\x2d43c3\x2d384f\x2dac09\x2d0f05b069491b.mount: Deactivated successfully. 
Nov 6 05:55:34.320602 containerd[1638]: time="2025-11-06T05:55:34.320087318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-hw8f5,Uid:6eba05f7-0eaa-45c3-8192-73bb69abd3a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:34.322794 kubelet[2932]: E1106 05:55:34.321115 2932 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 6 05:55:34.322794 kubelet[2932]: E1106 05:55:34.321259 2932 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" Nov 6 05:55:34.322794 kubelet[2932]: E1106 05:55:34.321341 2932 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" Nov 6 05:55:34.323118 kubelet[2932]: E1106 05:55:34.323076 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05514594bb153f82c66207add9c13343ecef7c03062d121b39019b4dfd53a01b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:55:34.534580 systemd[1]: Removed slice kubepods-besteffort-podbbb1a0ea_189c_4a8a_9af6_93e27581e04c.slice - libcontainer container kubepods-besteffort-podbbb1a0ea_189c_4a8a_9af6_93e27581e04c.slice. Nov 6 05:55:34.960190 containerd[1638]: time="2025-11-06T05:55:34.960034043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-hw8f5,Uid:6eba05f7-0eaa-45c3-8192-73bb69abd3a6,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:55:35.159103 systemd[1]: Created slice kubepods-besteffort-pod09d71414_7028_4395_b830_79141d516415.slice - libcontainer container kubepods-besteffort-pod09d71414_7028_4395_b830_79141d516415.slice. 
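The RunPodSandbox failure above is the Calico CNI plugin stat-ing /var/lib/calico/nodename before calico-node has written it. A minimal Go sketch of that same existence-and-read check (the path comes from the error text; everything else is illustrative and is not the plugin's actual code):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename" // path from the error message above

	// Mirror the failing call in the log: stat the file first.
	if _, err := os.Stat(nodenameFile); err != nil {
		// This is the situation logged above: calico-node has not (yet)
		// mounted /var/lib/calico/ and written the node name.
		fmt.Println("not ready:", err)
		return
	}

	b, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("calico nodename:", strings.TrimSpace(string(b)))
}
```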
Nov 6 05:55:35.226545 kubelet[2932]: I1106 05:55:35.224743 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09d71414-7028-4395-b830-79141d516415-whisker-ca-bundle\") pod \"whisker-859b8f77d-fjqls\" (UID: \"09d71414-7028-4395-b830-79141d516415\") " pod="calico-system/whisker-859b8f77d-fjqls" Nov 6 05:55:35.226545 kubelet[2932]: I1106 05:55:35.224832 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msgdf\" (UniqueName: \"kubernetes.io/projected/09d71414-7028-4395-b830-79141d516415-kube-api-access-msgdf\") pod \"whisker-859b8f77d-fjqls\" (UID: \"09d71414-7028-4395-b830-79141d516415\") " pod="calico-system/whisker-859b8f77d-fjqls" Nov 6 05:55:35.226545 kubelet[2932]: I1106 05:55:35.224876 2932 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/09d71414-7028-4395-b830-79141d516415-whisker-backend-key-pair\") pod \"whisker-859b8f77d-fjqls\" (UID: \"09d71414-7028-4395-b830-79141d516415\") " pod="calico-system/whisker-859b8f77d-fjqls" Nov 6 05:55:35.394618 systemd-networkd[1532]: caliaa23cc8f3a2: Link UP Nov 6 05:55:35.396410 systemd-networkd[1532]: caliaa23cc8f3a2: Gained carrier Nov 6 05:55:35.460425 containerd[1638]: 2025-11-06 05:55:35.079 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 05:55:35.460425 containerd[1638]: 2025-11-06 05:55:35.142 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0 calico-apiserver-8cd67c9c6- calico-apiserver 6eba05f7-0eaa-45c3-8192-73bb69abd3a6 944 0 2025-11-06 05:54:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cd67c9c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com calico-apiserver-8cd67c9c6-hw8f5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaa23cc8f3a2 [] [] }} ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-" Nov 6 05:55:35.460425 containerd[1638]: 2025-11-06 05:55:35.145 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:35.460425 containerd[1638]: 2025-11-06 05:55:35.259 [INFO][4244] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" HandleID="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.259 [INFO][4244] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" 
HandleID="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032b5d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"calico-apiserver-8cd67c9c6-hw8f5", "timestamp":"2025-11-06 05:55:35.259285721 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.260 [INFO][4244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.260 [INFO][4244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.260 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.274 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.287 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.297 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.300 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.460853 containerd[1638]: 2025-11-06 05:55:35.305 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.463475 containerd[1638]: 2025-11-06 05:55:35.305 [INFO][4244] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.463475 containerd[1638]: 2025-11-06 05:55:35.308 [INFO][4244] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363 Nov 6 05:55:35.463475 containerd[1638]: 2025-11-06 05:55:35.318 [INFO][4244] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.463475 containerd[1638]: 2025-11-06 05:55:35.346 [INFO][4244] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.1/26] block=192.168.47.0/26 handle="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.463475 containerd[1638]: 2025-11-06 05:55:35.347 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.1/26] handle="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.463475 containerd[1638]: 2025-11-06 05:55:35.347 [INFO][4244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:35.463475 containerd[1638]: 2025-11-06 05:55:35.347 [INFO][4244] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.1/26] IPv6=[] ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" HandleID="k8s-pod-network.9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:35.463776 containerd[1638]: 2025-11-06 05:55:35.355 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0", GenerateName:"calico-apiserver-8cd67c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6eba05f7-0eaa-45c3-8192-73bb69abd3a6", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd67c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8cd67c9c6-hw8f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa23cc8f3a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:35.463890 containerd[1638]: 2025-11-06 05:55:35.355 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.1/32] ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:35.463890 containerd[1638]: 2025-11-06 05:55:35.356 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa23cc8f3a2 ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:35.463890 containerd[1638]: 2025-11-06 05:55:35.400 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:35.464029 containerd[1638]: 2025-11-06 05:55:35.405 [INFO][4209] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0", GenerateName:"calico-apiserver-8cd67c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6eba05f7-0eaa-45c3-8192-73bb69abd3a6", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd67c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363", Pod:"calico-apiserver-8cd67c9c6-hw8f5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaa23cc8f3a2", MAC:"36:0a:f9:fc:26:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:35.466163 containerd[1638]: 2025-11-06 05:55:35.454 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-hw8f5" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--hw8f5-eth0" Nov 6 05:55:35.475738 containerd[1638]: time="2025-11-06T05:55:35.475500887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859b8f77d-fjqls,Uid:09d71414-7028-4395-b830-79141d516415,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:35.775200 containerd[1638]: time="2025-11-06T05:55:35.775062644Z" level=info msg="connecting to shim 9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363" address="unix:///run/containerd/s/402a257a8f697b2da2948fc0c65fc01aad74611b928f17518ca1f472113945d4" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:35.823328 systemd-networkd[1532]: calie7bec9be648: Link UP Nov 6 05:55:35.825941 systemd-networkd[1532]: calie7bec9be648: Gained carrier Nov 6 05:55:35.872892 systemd[1]: Started cri-containerd-9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363.scope - libcontainer container 9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363. 
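The endpoint written above carries the host-side interface caliaa23cc8f3a2, MAC 36:0a:f9:fc:26:71 and the single-address network 192.168.47.1/32. A trivial sanity check of those two fields, with the values copied from the dump (purely illustrative):

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// Values copied from the WorkloadEndpoint dump in the log above.
	mac, err := net.ParseMAC("36:0a:f9:fc:26:71")
	if err != nil {
		panic(err)
	}
	podNet := netip.MustParsePrefix("192.168.47.1/32") // single-address route for the pod

	fmt.Println("endpoint MAC:", mac)
	fmt.Println("pod address :", podNet.Addr(), "is /32 host route:", podNet.IsSingleIP())
}
```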
Nov 6 05:55:35.881508 containerd[1638]: 2025-11-06 05:55:35.580 [INFO][4260] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 6 05:55:35.881508 containerd[1638]: 2025-11-06 05:55:35.616 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0 whisker-859b8f77d- calico-system 09d71414-7028-4395-b830-79141d516415 974 0 2025-11-06 05:55:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:859b8f77d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com whisker-859b8f77d-fjqls eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie7bec9be648 [] [] }} ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-" Nov 6 05:55:35.881508 containerd[1638]: 2025-11-06 05:55:35.616 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" Nov 6 05:55:35.881508 containerd[1638]: 2025-11-06 05:55:35.709 [INFO][4287] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" HandleID="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Workload="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.711 [INFO][4287] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" HandleID="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Workload="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f700), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"whisker-859b8f77d-fjqls", "timestamp":"2025-11-06 05:55:35.709735948 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.711 [INFO][4287] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.711 [INFO][4287] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.711 [INFO][4287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.724 [INFO][4287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.736 [INFO][4287] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.754 [INFO][4287] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.762 [INFO][4287] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883062 containerd[1638]: 2025-11-06 05:55:35.766 [INFO][4287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883543 containerd[1638]: 2025-11-06 05:55:35.766 [INFO][4287] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883543 containerd[1638]: 2025-11-06 05:55:35.771 [INFO][4287] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807 Nov 6 05:55:35.883543 containerd[1638]: 2025-11-06 05:55:35.782 [INFO][4287] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883543 containerd[1638]: 2025-11-06 05:55:35.797 [INFO][4287] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.2/26] block=192.168.47.0/26 handle="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883543 containerd[1638]: 2025-11-06 05:55:35.797 [INFO][4287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.2/26] handle="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:35.883543 containerd[1638]: 2025-11-06 05:55:35.797 [INFO][4287] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:35.883543 containerd[1638]: 2025-11-06 05:55:35.798 [INFO][4287] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.2/26] IPv6=[] ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" HandleID="k8s-pod-network.2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Workload="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" Nov 6 05:55:35.883837 containerd[1638]: 2025-11-06 05:55:35.813 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0", GenerateName:"whisker-859b8f77d-", Namespace:"calico-system", SelfLink:"", UID:"09d71414-7028-4395-b830-79141d516415", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"859b8f77d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"whisker-859b8f77d-fjqls", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie7bec9be648", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:35.883837 containerd[1638]: 2025-11-06 05:55:35.813 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.2/32] ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" Nov 6 05:55:35.883985 containerd[1638]: 2025-11-06 05:55:35.813 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie7bec9be648 ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" Nov 6 05:55:35.883985 containerd[1638]: 2025-11-06 05:55:35.828 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" Nov 6 05:55:35.884090 containerd[1638]: 2025-11-06 05:55:35.829 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" 
Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0", GenerateName:"whisker-859b8f77d-", Namespace:"calico-system", SelfLink:"", UID:"09d71414-7028-4395-b830-79141d516415", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"859b8f77d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807", Pod:"whisker-859b8f77d-fjqls", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.47.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie7bec9be648", MAC:"de:60:d9:71:df:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:35.884723 containerd[1638]: 2025-11-06 05:55:35.860 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" Namespace="calico-system" Pod="whisker-859b8f77d-fjqls" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-whisker--859b8f77d--fjqls-eth0" Nov 6 05:55:35.950933 containerd[1638]: time="2025-11-06T05:55:35.950834568Z" level=info msg="connecting to shim 2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807" address="unix:///run/containerd/s/bb27f64fd70835f8563bdfdc4cafe63539a9879785b2a64dad7db7d006bbd231" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:36.041842 systemd[1]: Started cri-containerd-2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807.scope - libcontainer container 2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807. 
Nov 6 05:55:36.136703 containerd[1638]: time="2025-11-06T05:55:36.136597729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-hw8f5,Uid:6eba05f7-0eaa-45c3-8192-73bb69abd3a6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9dc5724726e8d79facede4c56a9b4b89fbe4252a0c1b5680ecb5828b6649b363\"" Nov 6 05:55:36.154361 containerd[1638]: time="2025-11-06T05:55:36.152933533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:55:36.265213 containerd[1638]: time="2025-11-06T05:55:36.265126987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859b8f77d-fjqls,Uid:09d71414-7028-4395-b830-79141d516415,Namespace:calico-system,Attempt:0,} returns sandbox id \"2647597bf57e007fd0e148a99339582e05e98852552f388417dd1da641006807\"" Nov 6 05:55:36.527649 containerd[1638]: time="2025-11-06T05:55:36.527360303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:36.529101 containerd[1638]: time="2025-11-06T05:55:36.528782310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:55:36.529425 containerd[1638]: time="2025-11-06T05:55:36.529200509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:36.574591 kubelet[2932]: I1106 05:55:36.574482 2932 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbb1a0ea-189c-4a8a-9af6-93e27581e04c" path="/var/lib/kubelet/pods/bbb1a0ea-189c-4a8a-9af6-93e27581e04c/volumes" Nov 6 05:55:36.577096 kubelet[2932]: E1106 05:55:36.568574 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:36.577317 kubelet[2932]: E1106 05:55:36.577273 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:36.579805 containerd[1638]: time="2025-11-06T05:55:36.579047228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:55:36.586472 kubelet[2932]: E1106 05:55:36.586029 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xltmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:36.587973 kubelet[2932]: E1106 05:55:36.587905 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:55:36.890719 systemd-networkd[1532]: calie7bec9be648: Gained IPv6LL Nov 6 05:55:36.913432 containerd[1638]: time="2025-11-06T05:55:36.913327295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:36.924318 containerd[1638]: time="2025-11-06T05:55:36.924245901Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:55:36.925220 containerd[1638]: time="2025-11-06T05:55:36.924293139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:36.925606 kubelet[2932]: E1106 05:55:36.925365 2932 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:55:36.925606 kubelet[2932]: E1106 05:55:36.925445 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:55:36.925962 kubelet[2932]: E1106 05:55:36.925899 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:701edbcabe75434aab47d246a6d809dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:36.929172 containerd[1638]: time="2025-11-06T05:55:36.929087361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:55:36.980807 kubelet[2932]: E1106 05:55:36.980383 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:55:37.205501 systemd-networkd[1532]: caliaa23cc8f3a2: Gained IPv6LL Nov 6 05:55:37.239802 containerd[1638]: time="2025-11-06T05:55:37.239649088Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:37.243033 containerd[1638]: time="2025-11-06T05:55:37.242957829Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:55:37.243664 containerd[1638]: time="2025-11-06T05:55:37.243280158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:37.245563 kubelet[2932]: E1106 05:55:37.244594 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:55:37.245563 kubelet[2932]: E1106 05:55:37.244707 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:55:37.247689 kubelet[2932]: E1106 05:55:37.246480 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:37.250291 kubelet[2932]: E1106 05:55:37.250207 2932 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:55:37.412855 systemd-networkd[1532]: vxlan.calico: Link UP Nov 6 05:55:37.412867 systemd-networkd[1532]: vxlan.calico: Gained carrier Nov 6 05:55:37.983632 kubelet[2932]: E1106 05:55:37.983058 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:55:37.985365 kubelet[2932]: E1106 05:55:37.984569 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:55:38.485415 systemd-networkd[1532]: vxlan.calico: Gained IPv6LL Nov 6 05:55:41.520187 containerd[1638]: time="2025-11-06T05:55:41.519906031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26ttf,Uid:395b50d4-a01a-4208-bbc6-1cef03cb976b,Namespace:kube-system,Attempt:0,}" Nov 6 05:55:41.742644 systemd-networkd[1532]: cali81addfdecde: Link UP Nov 6 05:55:41.744914 systemd-networkd[1532]: cali81addfdecde: Gained carrier Nov 6 05:55:41.781130 containerd[1638]: 2025-11-06 05:55:41.586 [INFO][4611] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0 coredns-674b8bbfcf- kube-system 395b50d4-a01a-4208-bbc6-1cef03cb976b 869 0 2025-11-06 05:54:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com coredns-674b8bbfcf-26ttf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali81addfdecde [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-" Nov 6 05:55:41.781130 containerd[1638]: 2025-11-06 05:55:41.586 [INFO][4611] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" Nov 6 05:55:41.781130 containerd[1638]: 2025-11-06 05:55:41.634 [INFO][4624] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" HandleID="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Workload="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.634 [INFO][4624] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" HandleID="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Workload="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c55b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-26ttf", "timestamp":"2025-11-06 05:55:41.634206215 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.635 [INFO][4624] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.635 [INFO][4624] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.635 [INFO][4624] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.647 [INFO][4624] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.688 [INFO][4624] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.699 [INFO][4624] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.702 [INFO][4624] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.781919 containerd[1638]: 2025-11-06 05:55:41.705 [INFO][4624] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.784122 containerd[1638]: 2025-11-06 05:55:41.705 [INFO][4624] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.784122 containerd[1638]: 2025-11-06 05:55:41.708 [INFO][4624] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f Nov 6 05:55:41.784122 containerd[1638]: 2025-11-06 05:55:41.713 [INFO][4624] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.784122 containerd[1638]: 2025-11-06 05:55:41.722 [INFO][4624] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.3/26] block=192.168.47.0/26 handle="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.784122 containerd[1638]: 2025-11-06 05:55:41.723 [INFO][4624] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.3/26] handle="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:41.784122 containerd[1638]: 2025-11-06 05:55:41.723 [INFO][4624] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:41.784122 containerd[1638]: 2025-11-06 05:55:41.723 [INFO][4624] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.3/26] IPv6=[] ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" HandleID="k8s-pod-network.24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Workload="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" Nov 6 05:55:41.785895 containerd[1638]: 2025-11-06 05:55:41.729 [INFO][4611] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"395b50d4-a01a-4208-bbc6-1cef03cb976b", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-26ttf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81addfdecde", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:41.785895 containerd[1638]: 2025-11-06 05:55:41.731 [INFO][4611] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.3/32] ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" Nov 6 05:55:41.785895 containerd[1638]: 2025-11-06 05:55:41.731 [INFO][4611] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81addfdecde ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" Nov 6 05:55:41.785895 containerd[1638]: 2025-11-06 05:55:41.747 [INFO][4611] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" Nov 6 05:55:41.785895 containerd[1638]: 2025-11-06 05:55:41.748 [INFO][4611] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"395b50d4-a01a-4208-bbc6-1cef03cb976b", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f", Pod:"coredns-674b8bbfcf-26ttf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali81addfdecde", MAC:"06:73:c0:ac:97:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:41.785895 containerd[1638]: 2025-11-06 05:55:41.765 [INFO][4611] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" Namespace="kube-system" Pod="coredns-674b8bbfcf-26ttf" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--26ttf-eth0" Nov 6 05:55:41.825492 containerd[1638]: time="2025-11-06T05:55:41.825413700Z" level=info msg="connecting to shim 24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f" address="unix:///run/containerd/s/35c5df869bfd6f26a550d622eaed546ad21bdf130d67277299c13ab24d81a48a" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:41.878399 systemd[1]: Started cri-containerd-24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f.scope - libcontainer container 24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f. 
Nov 6 05:55:41.960959 containerd[1638]: time="2025-11-06T05:55:41.960820078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26ttf,Uid:395b50d4-a01a-4208-bbc6-1cef03cb976b,Namespace:kube-system,Attempt:0,} returns sandbox id \"24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f\"" Nov 6 05:55:41.970593 containerd[1638]: time="2025-11-06T05:55:41.970517243Z" level=info msg="CreateContainer within sandbox \"24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 05:55:41.995651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3468622106.mount: Deactivated successfully. Nov 6 05:55:41.996455 containerd[1638]: time="2025-11-06T05:55:41.996048862Z" level=info msg="Container ccdbb397083da5ffb26aae88eb73106915e541f1ab762fa8b99f2c5db90262f9: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:55:42.011495 containerd[1638]: time="2025-11-06T05:55:42.011029710Z" level=info msg="CreateContainer within sandbox \"24b101bd8f8c8ca98e4a41c6968ac4230006bf48fdd2357deabdcb669702177f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ccdbb397083da5ffb26aae88eb73106915e541f1ab762fa8b99f2c5db90262f9\"" Nov 6 05:55:42.012579 containerd[1638]: time="2025-11-06T05:55:42.012458907Z" level=info msg="StartContainer for \"ccdbb397083da5ffb26aae88eb73106915e541f1ab762fa8b99f2c5db90262f9\"" Nov 6 05:55:42.015969 containerd[1638]: time="2025-11-06T05:55:42.015836364Z" level=info msg="connecting to shim ccdbb397083da5ffb26aae88eb73106915e541f1ab762fa8b99f2c5db90262f9" address="unix:///run/containerd/s/35c5df869bfd6f26a550d622eaed546ad21bdf130d67277299c13ab24d81a48a" protocol=ttrpc version=3 Nov 6 05:55:42.050371 systemd[1]: Started cri-containerd-ccdbb397083da5ffb26aae88eb73106915e541f1ab762fa8b99f2c5db90262f9.scope - libcontainer container ccdbb397083da5ffb26aae88eb73106915e541f1ab762fa8b99f2c5db90262f9. Nov 6 05:55:42.109936 containerd[1638]: time="2025-11-06T05:55:42.109873067Z" level=info msg="StartContainer for \"ccdbb397083da5ffb26aae88eb73106915e541f1ab762fa8b99f2c5db90262f9\" returns successfully" Nov 6 05:55:42.530533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3048016203.mount: Deactivated successfully. 
Nov 6 05:55:42.901846 systemd-networkd[1532]: cali81addfdecde: Gained IPv6LL Nov 6 05:55:43.067195 kubelet[2932]: I1106 05:55:43.063568 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-26ttf" podStartSLOduration=60.063474987 podStartE2EDuration="1m0.063474987s" podCreationTimestamp="2025-11-06 05:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:55:43.041822626 +0000 UTC m=+66.788803463" watchObservedRunningTime="2025-11-06 05:55:43.063474987 +0000 UTC m=+66.810455812" Nov 6 05:55:43.521331 containerd[1638]: time="2025-11-06T05:55:43.521206494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vklp,Uid:9a7a74a3-52cd-4b77-ac72-984211b63b0e,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:43.703291 systemd-networkd[1532]: cali54457362c43: Link UP Nov 6 05:55:43.704699 systemd-networkd[1532]: cali54457362c43: Gained carrier Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.596 [INFO][4737] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0 goldmane-666569f655- calico-system 9a7a74a3-52cd-4b77-ac72-984211b63b0e 874 0 2025-11-06 05:54:59 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com goldmane-666569f655-4vklp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali54457362c43 [] [] }} ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.597 [INFO][4737] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.645 [INFO][4748] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" HandleID="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Workload="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.646 [INFO][4748] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" HandleID="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Workload="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e240), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"goldmane-666569f655-4vklp", "timestamp":"2025-11-06 05:55:43.645970164 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.646 [INFO][4748] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.646 [INFO][4748] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.646 [INFO][4748] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.657 [INFO][4748] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.664 [INFO][4748] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.672 [INFO][4748] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.674 [INFO][4748] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.677 [INFO][4748] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.677 [INFO][4748] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.680 [INFO][4748] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.685 [INFO][4748] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.694 [INFO][4748] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.4/26] block=192.168.47.0/26 handle="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.694 [INFO][4748] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.4/26] handle="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.694 [INFO][4748] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:43.736114 containerd[1638]: 2025-11-06 05:55:43.694 [INFO][4748] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.4/26] IPv6=[] ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" HandleID="k8s-pod-network.1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Workload="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" Nov 6 05:55:43.738854 containerd[1638]: 2025-11-06 05:55:43.698 [INFO][4737] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9a7a74a3-52cd-4b77-ac72-984211b63b0e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-4vklp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali54457362c43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:43.738854 containerd[1638]: 2025-11-06 05:55:43.698 [INFO][4737] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.4/32] ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" Nov 6 05:55:43.738854 containerd[1638]: 2025-11-06 05:55:43.698 [INFO][4737] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54457362c43 ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" Nov 6 05:55:43.738854 containerd[1638]: 2025-11-06 05:55:43.705 [INFO][4737] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" Nov 6 05:55:43.738854 containerd[1638]: 2025-11-06 05:55:43.706 [INFO][4737] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" 
Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9a7a74a3-52cd-4b77-ac72-984211b63b0e", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b", Pod:"goldmane-666569f655-4vklp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.47.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali54457362c43", MAC:"ba:b8:ed:99:47:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:43.738854 containerd[1638]: 2025-11-06 05:55:43.732 [INFO][4737] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" Namespace="calico-system" Pod="goldmane-666569f655-4vklp" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-goldmane--666569f655--4vklp-eth0" Nov 6 05:55:43.778332 containerd[1638]: time="2025-11-06T05:55:43.777809187Z" level=info msg="connecting to shim 1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b" address="unix:///run/containerd/s/94ea4065f95ea728ea41a201f5cfa3f76b525f2de255b3a0a225a9199d4e849f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:43.825433 systemd[1]: Started cri-containerd-1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b.scope - libcontainer container 1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b. 
Nov 6 05:55:43.905784 containerd[1638]: time="2025-11-06T05:55:43.905675071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-4vklp,Uid:9a7a74a3-52cd-4b77-ac72-984211b63b0e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f13981542d7778a01cfc78a8d03056ad44162f89b6eb0ddc7a74cf481cdc89b\"" Nov 6 05:55:43.908537 containerd[1638]: time="2025-11-06T05:55:43.908422889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:55:44.226994 containerd[1638]: time="2025-11-06T05:55:44.226883135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:44.228403 containerd[1638]: time="2025-11-06T05:55:44.228238180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:55:44.228403 containerd[1638]: time="2025-11-06T05:55:44.228348213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:44.228879 kubelet[2932]: E1106 05:55:44.228764 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:55:44.229451 kubelet[2932]: E1106 05:55:44.228926 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:55:44.229451 kubelet[2932]: E1106 05:55:44.229333 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68xk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vklp_calico-system(9a7a74a3-52cd-4b77-ac72-984211b63b0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:44.231572 kubelet[2932]: E1106 05:55:44.231512 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:55:44.520963 containerd[1638]: time="2025-11-06T05:55:44.520680567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-zmk4q,Uid:da766196-4491-43f8-a7f4-97b6b2ff4f0a,Namespace:calico-apiserver,Attempt:0,}" Nov 6 05:55:44.521837 containerd[1638]: time="2025-11-06T05:55:44.521121936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbnqx,Uid:7ed9c04f-448d-4add-ac51-b62b66fb2d29,Namespace:kube-system,Attempt:0,}" Nov 6 05:55:44.736579 systemd-networkd[1532]: cali7cc7a41a2ab: Link UP Nov 6 05:55:44.738811 systemd-networkd[1532]: cali7cc7a41a2ab: Gained carrier Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.606 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0 coredns-674b8bbfcf- kube-system 7ed9c04f-448d-4add-ac51-b62b66fb2d29 875 0 2025-11-06 05:54:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com coredns-674b8bbfcf-pbnqx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7cc7a41a2ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.606 [INFO][4823] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.666 [INFO][4842] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" HandleID="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Workload="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.666 [INFO][4842] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" HandleID="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Workload="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-pbnqx", "timestamp":"2025-11-06 05:55:44.666463901 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.666 [INFO][4842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.666 [INFO][4842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.667 [INFO][4842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.684 [INFO][4842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.694 [INFO][4842] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.700 [INFO][4842] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.703 [INFO][4842] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.706 [INFO][4842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.706 [INFO][4842] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.708 [INFO][4842] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7 Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.713 [INFO][4842] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.721 [INFO][4842] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.5/26] block=192.168.47.0/26 handle="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.721 [INFO][4842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.5/26] handle="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.721 [INFO][4842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:44.774545 containerd[1638]: 2025-11-06 05:55:44.721 [INFO][4842] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.5/26] IPv6=[] ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" HandleID="k8s-pod-network.3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Workload="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" Nov 6 05:55:44.779025 containerd[1638]: 2025-11-06 05:55:44.727 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ed9c04f-448d-4add-ac51-b62b66fb2d29", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-pbnqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cc7a41a2ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:44.779025 containerd[1638]: 2025-11-06 05:55:44.727 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.5/32] ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" Nov 6 05:55:44.779025 containerd[1638]: 2025-11-06 05:55:44.727 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cc7a41a2ab ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" Nov 6 05:55:44.779025 containerd[1638]: 2025-11-06 05:55:44.737 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" Nov 6 05:55:44.779025 containerd[1638]: 2025-11-06 05:55:44.738 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7ed9c04f-448d-4add-ac51-b62b66fb2d29", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7", Pod:"coredns-674b8bbfcf-pbnqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7cc7a41a2ab", MAC:"ea:05:13:73:11:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:44.779025 containerd[1638]: 2025-11-06 05:55:44.765 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" Namespace="kube-system" Pod="coredns-674b8bbfcf-pbnqx" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-coredns--674b8bbfcf--pbnqx-eth0" Nov 6 05:55:44.840635 containerd[1638]: time="2025-11-06T05:55:44.839378885Z" level=info msg="connecting to shim 3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7" address="unix:///run/containerd/s/c05cb2e6e564d218f9cee9cab15b8b26f6fdea81b0d9549c230f665308174f63" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:44.901477 systemd-networkd[1532]: cali0320bdcd0cd: Link UP Nov 6 05:55:44.904388 systemd-networkd[1532]: cali0320bdcd0cd: Gained carrier Nov 6 05:55:44.940428 systemd[1]: Started cri-containerd-3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7.scope - libcontainer container 3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7. 
Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.619 [INFO][4817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0 calico-apiserver-8cd67c9c6- calico-apiserver da766196-4491-43f8-a7f4-97b6b2ff4f0a 871 0 2025-11-06 05:54:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8cd67c9c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com calico-apiserver-8cd67c9c6-zmk4q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0320bdcd0cd [] [] }} ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.619 [INFO][4817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.684 [INFO][4847] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" HandleID="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.684 [INFO][4847] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" HandleID="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000321e50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"calico-apiserver-8cd67c9c6-zmk4q", "timestamp":"2025-11-06 05:55:44.684167047 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.685 [INFO][4847] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.721 [INFO][4847] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.721 [INFO][4847] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.789 [INFO][4847] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.806 [INFO][4847] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.818 [INFO][4847] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.831 [INFO][4847] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.843 [INFO][4847] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.844 [INFO][4847] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.848 [INFO][4847] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205 Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.860 [INFO][4847] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.880 [INFO][4847] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.6/26] block=192.168.47.0/26 handle="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.881 [INFO][4847] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.6/26] handle="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.881 [INFO][4847] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:44.945040 containerd[1638]: 2025-11-06 05:55:44.881 [INFO][4847] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.6/26] IPv6=[] ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" HandleID="k8s-pod-network.d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" Nov 6 05:55:44.946595 containerd[1638]: 2025-11-06 05:55:44.889 [INFO][4817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0", GenerateName:"calico-apiserver-8cd67c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"da766196-4491-43f8-a7f4-97b6b2ff4f0a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd67c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8cd67c9c6-zmk4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0320bdcd0cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:44.946595 containerd[1638]: 2025-11-06 05:55:44.892 [INFO][4817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.6/32] ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" Nov 6 05:55:44.946595 containerd[1638]: 2025-11-06 05:55:44.892 [INFO][4817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0320bdcd0cd ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" Nov 6 05:55:44.946595 containerd[1638]: 2025-11-06 05:55:44.904 [INFO][4817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" Nov 6 05:55:44.946595 containerd[1638]: 2025-11-06 05:55:44.905 [INFO][4817] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0", GenerateName:"calico-apiserver-8cd67c9c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"da766196-4491-43f8-a7f4-97b6b2ff4f0a", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 54, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8cd67c9c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205", Pod:"calico-apiserver-8cd67c9c6-zmk4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0320bdcd0cd", MAC:"96:76:4e:64:ee:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:44.946595 containerd[1638]: 2025-11-06 05:55:44.931 [INFO][4817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" Namespace="calico-apiserver" Pod="calico-apiserver-8cd67c9c6-zmk4q" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--apiserver--8cd67c9c6--zmk4q-eth0" Nov 6 05:55:44.988375 containerd[1638]: time="2025-11-06T05:55:44.988243008Z" level=info msg="connecting to shim d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205" address="unix:///run/containerd/s/5c289bdc7be17ec51029db1666ad79e4ba38fc7bee4c26a7b4fed7b7d277028f" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:45.029638 kubelet[2932]: E1106 05:55:45.027543 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:55:45.049403 systemd[1]: Started cri-containerd-d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205.scope - libcontainer container d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205. 
Nov 6 05:55:45.097815 containerd[1638]: time="2025-11-06T05:55:45.097720234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pbnqx,Uid:7ed9c04f-448d-4add-ac51-b62b66fb2d29,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7\"" Nov 6 05:55:45.109798 containerd[1638]: time="2025-11-06T05:55:45.109633452Z" level=info msg="CreateContainer within sandbox \"3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 05:55:45.133051 containerd[1638]: time="2025-11-06T05:55:45.133004273Z" level=info msg="Container 2d8230ac8fa8973f3bb5b2e8266f902efca73a84bb7fa2cec36fcdd7c8f02c79: CDI devices from CRI Config.CDIDevices: []" Nov 6 05:55:45.145986 containerd[1638]: time="2025-11-06T05:55:45.145596760Z" level=info msg="CreateContainer within sandbox \"3e6bd821a9235af1323c9d833a991eac4eae2edd9ef5bf07e2421d117c16d8b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2d8230ac8fa8973f3bb5b2e8266f902efca73a84bb7fa2cec36fcdd7c8f02c79\"" Nov 6 05:55:45.148181 containerd[1638]: time="2025-11-06T05:55:45.148113916Z" level=info msg="StartContainer for \"2d8230ac8fa8973f3bb5b2e8266f902efca73a84bb7fa2cec36fcdd7c8f02c79\"" Nov 6 05:55:45.150526 containerd[1638]: time="2025-11-06T05:55:45.150487174Z" level=info msg="connecting to shim 2d8230ac8fa8973f3bb5b2e8266f902efca73a84bb7fa2cec36fcdd7c8f02c79" address="unix:///run/containerd/s/c05cb2e6e564d218f9cee9cab15b8b26f6fdea81b0d9549c230f665308174f63" protocol=ttrpc version=3 Nov 6 05:55:45.196369 systemd[1]: Started cri-containerd-2d8230ac8fa8973f3bb5b2e8266f902efca73a84bb7fa2cec36fcdd7c8f02c79.scope - libcontainer container 2d8230ac8fa8973f3bb5b2e8266f902efca73a84bb7fa2cec36fcdd7c8f02c79. 
Nov 6 05:55:45.217094 containerd[1638]: time="2025-11-06T05:55:45.217015385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8cd67c9c6-zmk4q,Uid:da766196-4491-43f8-a7f4-97b6b2ff4f0a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d2c24b26c8953e860bdfd102d04d3acf6bd49936a6a61a7f4e628144e02ef205\"" Nov 6 05:55:45.222013 containerd[1638]: time="2025-11-06T05:55:45.221952864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:55:45.258669 containerd[1638]: time="2025-11-06T05:55:45.258614092Z" level=info msg="StartContainer for \"2d8230ac8fa8973f3bb5b2e8266f902efca73a84bb7fa2cec36fcdd7c8f02c79\" returns successfully" Nov 6 05:55:45.520852 containerd[1638]: time="2025-11-06T05:55:45.520484208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859c8f8bcd-gw95r,Uid:5513e9ee-51f3-4098-ad93-0cffcda4f037,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:45.528256 systemd-networkd[1532]: cali54457362c43: Gained IPv6LL Nov 6 05:55:45.541177 containerd[1638]: time="2025-11-06T05:55:45.541101836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:45.543263 containerd[1638]: time="2025-11-06T05:55:45.542737484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:55:45.543263 containerd[1638]: time="2025-11-06T05:55:45.542859008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:45.543378 kubelet[2932]: E1106 05:55:45.543072 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:45.543378 kubelet[2932]: E1106 05:55:45.543161 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:45.545227 kubelet[2932]: E1106 05:55:45.544446 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9hmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8cd67c9c6-zmk4q_calico-apiserver(da766196-4491-43f8-a7f4-97b6b2ff4f0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:45.546596 kubelet[2932]: E1106 05:55:45.546423 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:55:45.695729 systemd-networkd[1532]: cali9446141c8c6: Link UP Nov 6 05:55:45.697426 systemd-networkd[1532]: cali9446141c8c6: Gained carrier Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.594 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0 calico-kube-controllers-859c8f8bcd- calico-system 5513e9ee-51f3-4098-ad93-0cffcda4f037 873 0 2025-11-06 05:55:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:859c8f8bcd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com calico-kube-controllers-859c8f8bcd-gw95r eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9446141c8c6 [] [] }} ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.594 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.636 [INFO][5016] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" HandleID="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.637 [INFO][5016] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" HandleID="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"calico-kube-controllers-859c8f8bcd-gw95r", "timestamp":"2025-11-06 05:55:45.636931629 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.637 [INFO][5016] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.637 [INFO][5016] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.637 [INFO][5016] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.648 [INFO][5016] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.655 [INFO][5016] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.662 [INFO][5016] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.665 [INFO][5016] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.668 [INFO][5016] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.668 [INFO][5016] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.671 [INFO][5016] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675 Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.678 [INFO][5016] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.687 [INFO][5016] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.7/26] block=192.168.47.0/26 handle="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.687 [INFO][5016] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.7/26] handle="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.688 [INFO][5016] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:45.721539 containerd[1638]: 2025-11-06 05:55:45.688 [INFO][5016] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.7/26] IPv6=[] ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" HandleID="k8s-pod-network.94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Workload="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" Nov 6 05:55:45.724553 containerd[1638]: 2025-11-06 05:55:45.691 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0", GenerateName:"calico-kube-controllers-859c8f8bcd-", Namespace:"calico-system", SelfLink:"", UID:"5513e9ee-51f3-4098-ad93-0cffcda4f037", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 55, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"859c8f8bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-859c8f8bcd-gw95r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9446141c8c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:45.724553 containerd[1638]: 2025-11-06 05:55:45.691 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.7/32] ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" Nov 6 05:55:45.724553 containerd[1638]: 2025-11-06 05:55:45.692 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9446141c8c6 ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" Nov 6 05:55:45.724553 containerd[1638]: 2025-11-06 05:55:45.699 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" 
WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" Nov 6 05:55:45.724553 containerd[1638]: 2025-11-06 05:55:45.700 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0", GenerateName:"calico-kube-controllers-859c8f8bcd-", Namespace:"calico-system", SelfLink:"", UID:"5513e9ee-51f3-4098-ad93-0cffcda4f037", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 55, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"859c8f8bcd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675", Pod:"calico-kube-controllers-859c8f8bcd-gw95r", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9446141c8c6", MAC:"e6:9e:18:3c:e7:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:45.724553 containerd[1638]: 2025-11-06 05:55:45.716 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" Namespace="calico-system" Pod="calico-kube-controllers-859c8f8bcd-gw95r" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-calico--kube--controllers--859c8f8bcd--gw95r-eth0" Nov 6 05:55:45.764001 containerd[1638]: time="2025-11-06T05:55:45.762274501Z" level=info msg="connecting to shim 94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675" address="unix:///run/containerd/s/0bddd2dd7b0f4c534ebbd27b618589db0d0ff409a4f02f34d37d8ea570f70537" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:45.797840 systemd[1]: Started cri-containerd-94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675.scope - libcontainer container 94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675. Nov 6 05:55:45.822332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084596651.mount: Deactivated successfully. 
Nov 6 05:55:45.883860 containerd[1638]: time="2025-11-06T05:55:45.883778564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-859c8f8bcd-gw95r,Uid:5513e9ee-51f3-4098-ad93-0cffcda4f037,Namespace:calico-system,Attempt:0,} returns sandbox id \"94a1f179e422716baeac332d20654d71fffe95a5b8a14970d45f88a01e264675\"" Nov 6 05:55:45.886939 containerd[1638]: time="2025-11-06T05:55:45.886839764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:55:45.973499 systemd-networkd[1532]: cali7cc7a41a2ab: Gained IPv6LL Nov 6 05:55:46.038901 kubelet[2932]: E1106 05:55:46.038778 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:55:46.078123 kubelet[2932]: I1106 05:55:46.077640 2932 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pbnqx" podStartSLOduration=63.077594182 podStartE2EDuration="1m3.077594182s" podCreationTimestamp="2025-11-06 05:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 05:55:46.056674805 +0000 UTC m=+69.803655637" watchObservedRunningTime="2025-11-06 05:55:46.077594182 +0000 UTC m=+69.824575002" Nov 6 05:55:46.209444 containerd[1638]: time="2025-11-06T05:55:46.209327747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:46.211321 containerd[1638]: time="2025-11-06T05:55:46.211180596Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:55:46.211321 containerd[1638]: time="2025-11-06T05:55:46.211249168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:46.211691 kubelet[2932]: E1106 05:55:46.211624 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:55:46.211846 kubelet[2932]: E1106 05:55:46.211702 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:55:46.212185 kubelet[2932]: E1106 05:55:46.211929 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w4wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859c8f8bcd-gw95r_calico-system(5513e9ee-51f3-4098-ad93-0cffcda4f037): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:46.214242 kubelet[2932]: E1106 05:55:46.214204 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:55:46.549574 systemd-networkd[1532]: cali0320bdcd0cd: Gained IPv6LL Nov 6 05:55:47.042966 kubelet[2932]: E1106 05:55:47.042866 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:55:47.044773 kubelet[2932]: E1106 05:55:47.043493 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:55:47.520524 containerd[1638]: time="2025-11-06T05:55:47.520443245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-298gz,Uid:9b7c3e80-078e-47aa-8574-17ccfe24f839,Namespace:calico-system,Attempt:0,}" Nov 6 05:55:47.639317 systemd-networkd[1532]: cali9446141c8c6: Gained IPv6LL Nov 6 05:55:47.723526 systemd-networkd[1532]: cali660aa5aee35: Link UP Nov 6 05:55:47.723864 systemd-networkd[1532]: cali660aa5aee35: Gained carrier Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.595 [INFO][5082] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0 csi-node-driver- calico-system 9b7c3e80-078e-47aa-8574-17ccfe24f839 751 0 2025-11-06 05:55:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-dhf6q.gb1.brightbox.com csi-node-driver-298gz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali660aa5aee35 [] [] }} ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.595 [INFO][5082] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.649 [INFO][5094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" HandleID="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Workload="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.649 [INFO][5094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" 
HandleID="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Workload="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-dhf6q.gb1.brightbox.com", "pod":"csi-node-driver-298gz", "timestamp":"2025-11-06 05:55:47.64922203 +0000 UTC"}, Hostname:"srv-dhf6q.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.649 [INFO][5094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.649 [INFO][5094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.649 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-dhf6q.gb1.brightbox.com' Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.665 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.673 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.681 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.684 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.688 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.688 [INFO][5094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.690 [INFO][5094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21 Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.696 [INFO][5094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.712 [INFO][5094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.47.8/26] block=192.168.47.0/26 handle="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.712 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.47.8/26] handle="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" host="srv-dhf6q.gb1.brightbox.com" Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.712 [INFO][5094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 6 05:55:47.746781 containerd[1638]: 2025-11-06 05:55:47.712 [INFO][5094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.47.8/26] IPv6=[] ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" HandleID="k8s-pod-network.8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Workload="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" Nov 6 05:55:47.750000 containerd[1638]: 2025-11-06 05:55:47.717 [INFO][5082] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b7c3e80-078e-47aa-8574-17ccfe24f839", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 55, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-298gz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali660aa5aee35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:47.750000 containerd[1638]: 2025-11-06 05:55:47.717 [INFO][5082] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.47.8/32] ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" Nov 6 05:55:47.750000 containerd[1638]: 2025-11-06 05:55:47.717 [INFO][5082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali660aa5aee35 ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" Nov 6 05:55:47.750000 containerd[1638]: 2025-11-06 05:55:47.722 [INFO][5082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" Nov 6 05:55:47.750000 containerd[1638]: 2025-11-06 05:55:47.723 [INFO][5082] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b7c3e80-078e-47aa-8574-17ccfe24f839", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2025, time.November, 6, 5, 55, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-dhf6q.gb1.brightbox.com", ContainerID:"8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21", Pod:"csi-node-driver-298gz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali660aa5aee35", MAC:"9e:97:7a:b9:61:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 6 05:55:47.750000 containerd[1638]: 2025-11-06 05:55:47.741 [INFO][5082] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" Namespace="calico-system" Pod="csi-node-driver-298gz" WorkloadEndpoint="srv--dhf6q.gb1.brightbox.com-k8s-csi--node--driver--298gz-eth0" Nov 6 05:55:47.794174 containerd[1638]: time="2025-11-06T05:55:47.793894449Z" level=info msg="connecting to shim 8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21" address="unix:///run/containerd/s/6c894e14e65e3b05eed36aa0ce68919ae70561f267a0ce411fc65e9c9f4451d2" namespace=k8s.io protocol=ttrpc version=3 Nov 6 05:55:47.837353 systemd[1]: Started cri-containerd-8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21.scope - libcontainer container 8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21. 
Nov 6 05:55:47.887958 containerd[1638]: time="2025-11-06T05:55:47.887777701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-298gz,Uid:9b7c3e80-078e-47aa-8574-17ccfe24f839,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d7d92f54848c1b539501b0c7759d7966dbb3bf67c20502301b6c0dc8bdd8f21\"" Nov 6 05:55:47.891535 containerd[1638]: time="2025-11-06T05:55:47.891453724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 05:55:48.202638 containerd[1638]: time="2025-11-06T05:55:48.202544601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:48.203954 containerd[1638]: time="2025-11-06T05:55:48.203889968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:55:48.204156 containerd[1638]: time="2025-11-06T05:55:48.204051249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:48.204670 kubelet[2932]: E1106 05:55:48.204403 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:55:48.204670 kubelet[2932]: E1106 05:55:48.204467 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:55:48.210778 kubelet[2932]: E1106 05:55:48.210713 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flp8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:48.212954 containerd[1638]: time="2025-11-06T05:55:48.212917550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:55:48.522643 containerd[1638]: time="2025-11-06T05:55:48.522396315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:48.525554 containerd[1638]: time="2025-11-06T05:55:48.525451904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:48.525554 containerd[1638]: time="2025-11-06T05:55:48.525467495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:55:48.525908 kubelet[2932]: E1106 05:55:48.525860 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:55:48.525987 kubelet[2932]: E1106 05:55:48.525926 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:55:48.526334 kubelet[2932]: E1106 05:55:48.526124 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flp8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:48.527425 kubelet[2932]: E1106 05:55:48.527354 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:48.790833 systemd-networkd[1532]: cali660aa5aee35: Gained IPv6LL Nov 6 05:55:49.061230 kubelet[2932]: E1106 05:55:49.061001 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:55:49.522538 containerd[1638]: time="2025-11-06T05:55:49.522432498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:55:49.837467 containerd[1638]: time="2025-11-06T05:55:49.836976926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:49.839558 containerd[1638]: time="2025-11-06T05:55:49.839430302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:55:49.839756 containerd[1638]: time="2025-11-06T05:55:49.839543776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:49.839970 kubelet[2932]: E1106 05:55:49.839903 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:55:49.841263 kubelet[2932]: E1106 05:55:49.839981 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:55:49.841263 kubelet[2932]: E1106 05:55:49.840201 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:701edbcabe75434aab47d246a6d809dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:49.843431 containerd[1638]: time="2025-11-06T05:55:49.843312314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:55:50.150546 containerd[1638]: time="2025-11-06T05:55:50.150437225Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:50.151834 containerd[1638]: time="2025-11-06T05:55:50.151704983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:55:50.151834 containerd[1638]: time="2025-11-06T05:55:50.151785796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:50.152151 kubelet[2932]: E1106 05:55:50.152086 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:55:50.152281 kubelet[2932]: E1106 05:55:50.152212 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:55:50.152862 kubelet[2932]: E1106 05:55:50.152771 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:50.154715 kubelet[2932]: E1106 05:55:50.154646 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:55:50.523972 containerd[1638]: time="2025-11-06T05:55:50.523410850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:55:50.838848 containerd[1638]: time="2025-11-06T05:55:50.838458692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:50.839952 containerd[1638]: time="2025-11-06T05:55:50.839751178Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 
05:55:50.839952 containerd[1638]: time="2025-11-06T05:55:50.839897546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:50.840324 kubelet[2932]: E1106 05:55:50.840155 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:50.841154 kubelet[2932]: E1106 05:55:50.840764 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:50.841154 kubelet[2932]: E1106 05:55:50.841036 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xltmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:50.842448 kubelet[2932]: E1106 05:55:50.842378 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:55:53.715448 systemd[1]: Started sshd@9-10.230.27.98:22-139.178.68.195:35048.service - OpenSSH per-connection server daemon (139.178.68.195:35048). Nov 6 05:55:54.554792 sshd[5167]: Accepted publickey for core from 139.178.68.195 port 35048 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:55:54.557584 sshd-session[5167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:55:54.568268 systemd-logind[1611]: New session 12 of user core. Nov 6 05:55:54.580407 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 6 05:55:55.659000 sshd[5171]: Connection closed by 139.178.68.195 port 35048 Nov 6 05:55:55.660207 sshd-session[5167]: pam_unix(sshd:session): session closed for user core Nov 6 05:55:55.671332 systemd[1]: sshd@9-10.230.27.98:22-139.178.68.195:35048.service: Deactivated successfully. Nov 6 05:55:55.678106 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 05:55:55.684118 systemd-logind[1611]: Session 12 logged out. Waiting for processes to exit. Nov 6 05:55:55.687417 systemd-logind[1611]: Removed session 12. Nov 6 05:55:57.521420 containerd[1638]: time="2025-11-06T05:55:57.521110054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:55:57.857506 containerd[1638]: time="2025-11-06T05:55:57.857421226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:57.862078 containerd[1638]: time="2025-11-06T05:55:57.861851382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:55:57.862718 containerd[1638]: time="2025-11-06T05:55:57.862011495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:57.862791 kubelet[2932]: E1106 05:55:57.862659 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:55:57.862791 kubelet[2932]: E1106 05:55:57.862749 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:55:57.864770 kubelet[2932]: E1106 05:55:57.863233 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68xk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vklp_calico-system(9a7a74a3-52cd-4b77-ac72-984211b63b0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:57.865675 kubelet[2932]: E1106 05:55:57.865635 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:55:57.865897 containerd[1638]: time="2025-11-06T05:55:57.865833781Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:55:58.223909 containerd[1638]: time="2025-11-06T05:55:58.222903948Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:58.228960 containerd[1638]: time="2025-11-06T05:55:58.228625670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:55:58.228960 containerd[1638]: time="2025-11-06T05:55:58.228683212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:58.229961 kubelet[2932]: E1106 05:55:58.229836 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:55:58.230661 kubelet[2932]: E1106 05:55:58.230124 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:55:58.232227 kubelet[2932]: E1106 05:55:58.231054 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w4wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859c8f8bcd-gw95r_calico-system(5513e9ee-51f3-4098-ad93-0cffcda4f037): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:58.233436 kubelet[2932]: E1106 05:55:58.233378 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:55:59.521168 containerd[1638]: time="2025-11-06T05:55:59.520978515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:55:59.861668 containerd[1638]: time="2025-11-06T05:55:59.861546417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:55:59.863246 containerd[1638]: time="2025-11-06T05:55:59.863194467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:55:59.863347 containerd[1638]: time="2025-11-06T05:55:59.863311931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:55:59.863583 kubelet[2932]: E1106 05:55:59.863527 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:59.864626 kubelet[2932]: E1106 05:55:59.863623 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:55:59.864626 kubelet[2932]: E1106 05:55:59.863890 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9hmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8cd67c9c6-zmk4q_calico-apiserver(da766196-4491-43f8-a7f4-97b6b2ff4f0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:55:59.866127 kubelet[2932]: E1106 05:55:59.865294 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:56:00.527501 containerd[1638]: time="2025-11-06T05:56:00.527425083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 05:56:00.818984 systemd[1]: Started sshd@10-10.230.27.98:22-139.178.68.195:35050.service - OpenSSH per-connection server daemon (139.178.68.195:35050). 
Nov 6 05:56:00.850118 containerd[1638]: time="2025-11-06T05:56:00.850025715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:00.851202 containerd[1638]: time="2025-11-06T05:56:00.851160487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:00.852328 containerd[1638]: time="2025-11-06T05:56:00.852211405Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:56:00.853086 kubelet[2932]: E1106 05:56:00.852702 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:56:00.853086 kubelet[2932]: E1106 05:56:00.852775 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:56:00.853086 kubelet[2932]: E1106 05:56:00.852984 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flp8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:00.858253 containerd[1638]: time="2025-11-06T05:56:00.858192973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:56:01.169278 containerd[1638]: time="2025-11-06T05:56:01.169123032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:01.170813 containerd[1638]: time="2025-11-06T05:56:01.170736745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:56:01.170906 containerd[1638]: time="2025-11-06T05:56:01.170862774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:01.171255 kubelet[2932]: E1106 05:56:01.171169 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:56:01.172377 kubelet[2932]: E1106 05:56:01.171266 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:56:01.172377 kubelet[2932]: E1106 05:56:01.171499 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flp8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:01.173280 kubelet[2932]: E1106 05:56:01.172760 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:56:01.521291 kubelet[2932]: E1106 05:56:01.521064 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:56:01.647371 sshd[5193]: Accepted publickey for core from 139.178.68.195 port 35050 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 
Nov 6 05:56:01.649556 sshd-session[5193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:01.658841 systemd-logind[1611]: New session 13 of user core. Nov 6 05:56:01.665463 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 05:56:02.250660 sshd[5196]: Connection closed by 139.178.68.195 port 35050 Nov 6 05:56:02.250468 sshd-session[5193]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:02.259729 systemd[1]: sshd@10-10.230.27.98:22-139.178.68.195:35050.service: Deactivated successfully. Nov 6 05:56:02.264704 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 05:56:02.267054 systemd-logind[1611]: Session 13 logged out. Waiting for processes to exit. Nov 6 05:56:02.269805 systemd-logind[1611]: Removed session 13. Nov 6 05:56:03.523558 kubelet[2932]: E1106 05:56:03.523478 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:56:07.409685 systemd[1]: Started sshd@11-10.230.27.98:22-139.178.68.195:38938.service - OpenSSH per-connection server daemon (139.178.68.195:38938). Nov 6 05:56:08.281337 sshd[5233]: Accepted publickey for core from 139.178.68.195 port 38938 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:08.283991 sshd-session[5233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:08.292906 systemd-logind[1611]: New session 14 of user core. Nov 6 05:56:08.300446 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 05:56:08.862025 sshd[5240]: Connection closed by 139.178.68.195 port 38938 Nov 6 05:56:08.862996 sshd-session[5233]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:08.869189 systemd[1]: sshd@11-10.230.27.98:22-139.178.68.195:38938.service: Deactivated successfully. Nov 6 05:56:08.873858 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 05:56:08.876805 systemd-logind[1611]: Session 14 logged out. Waiting for processes to exit. Nov 6 05:56:08.879556 systemd-logind[1611]: Removed session 14. Nov 6 05:56:09.030757 systemd[1]: Started sshd@12-10.230.27.98:22-139.178.68.195:38942.service - OpenSSH per-connection server daemon (139.178.68.195:38942). Nov 6 05:56:09.819305 sshd[5253]: Accepted publickey for core from 139.178.68.195 port 38942 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:09.821273 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:09.828862 systemd-logind[1611]: New session 15 of user core. Nov 6 05:56:09.838548 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 6 05:56:10.464382 sshd[5256]: Connection closed by 139.178.68.195 port 38942 Nov 6 05:56:10.465602 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:10.473391 systemd-logind[1611]: Session 15 logged out. Waiting for processes to exit. Nov 6 05:56:10.473658 systemd[1]: sshd@12-10.230.27.98:22-139.178.68.195:38942.service: Deactivated successfully. Nov 6 05:56:10.476817 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 05:56:10.480490 systemd-logind[1611]: Removed session 15. Nov 6 05:56:10.521436 kubelet[2932]: E1106 05:56:10.520970 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:56:10.627318 systemd[1]: Started sshd@13-10.230.27.98:22-139.178.68.195:38954.service - OpenSSH per-connection server daemon (139.178.68.195:38954). Nov 6 05:56:11.434510 sshd[5265]: Accepted publickey for core from 139.178.68.195 port 38954 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:11.435869 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:11.445870 systemd-logind[1611]: New session 16 of user core. Nov 6 05:56:11.454517 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 05:56:11.524580 kubelet[2932]: E1106 05:56:11.524498 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:56:12.003770 sshd[5268]: Connection closed by 139.178.68.195 port 38954 Nov 6 05:56:12.003455 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:12.009470 systemd-logind[1611]: Session 16 logged out. Waiting for processes to exit. Nov 6 05:56:12.010270 systemd[1]: sshd@13-10.230.27.98:22-139.178.68.195:38954.service: Deactivated successfully. Nov 6 05:56:12.014927 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 05:56:12.018764 systemd-logind[1611]: Removed session 16. 
Nov 6 05:56:14.521584 kubelet[2932]: E1106 05:56:14.521503 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:56:15.523638 containerd[1638]: time="2025-11-06T05:56:15.523555057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:56:15.866779 containerd[1638]: time="2025-11-06T05:56:15.866214326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:15.867983 containerd[1638]: time="2025-11-06T05:56:15.867901263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:56:15.868119 containerd[1638]: time="2025-11-06T05:56:15.868084357Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:15.868675 kubelet[2932]: E1106 05:56:15.868586 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:56:15.868675 kubelet[2932]: E1106 05:56:15.868670 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:56:15.871781 kubelet[2932]: E1106 05:56:15.871691 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xltmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:15.873664 kubelet[2932]: E1106 05:56:15.873442 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:56:16.527129 kubelet[2932]: E1106 05:56:16.526970 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:56:17.165492 systemd[1]: Started sshd@14-10.230.27.98:22-139.178.68.195:51152.service - OpenSSH per-connection server daemon (139.178.68.195:51152). Nov 6 05:56:17.971108 sshd[5292]: Accepted publickey for core from 139.178.68.195 port 51152 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:17.973602 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:17.981586 systemd-logind[1611]: New session 17 of user core. Nov 6 05:56:17.986358 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 6 05:56:18.522578 containerd[1638]: time="2025-11-06T05:56:18.522329574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:56:18.529785 sshd[5295]: Connection closed by 139.178.68.195 port 51152 Nov 6 05:56:18.530423 sshd-session[5292]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:18.540429 systemd[1]: sshd@14-10.230.27.98:22-139.178.68.195:51152.service: Deactivated successfully. Nov 6 05:56:18.545728 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 05:56:18.548449 systemd-logind[1611]: Session 17 logged out. Waiting for processes to exit. Nov 6 05:56:18.551948 systemd-logind[1611]: Removed session 17. Nov 6 05:56:18.848980 containerd[1638]: time="2025-11-06T05:56:18.848727466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:18.850226 containerd[1638]: time="2025-11-06T05:56:18.850102414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:56:18.850373 containerd[1638]: time="2025-11-06T05:56:18.850184362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:18.851168 kubelet[2932]: E1106 05:56:18.850723 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:56:18.851168 kubelet[2932]: E1106 05:56:18.850955 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:56:18.851921 kubelet[2932]: E1106 05:56:18.851850 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:701edbcabe75434aab47d246a6d809dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:18.857215 containerd[1638]: time="2025-11-06T05:56:18.856772530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:56:19.176450 containerd[1638]: time="2025-11-06T05:56:19.175640714Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:19.177914 containerd[1638]: time="2025-11-06T05:56:19.177822024Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:56:19.178162 containerd[1638]: time="2025-11-06T05:56:19.177900589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:19.178597 kubelet[2932]: E1106 05:56:19.178528 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:56:19.178784 kubelet[2932]: E1106 05:56:19.178614 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:56:19.179252 kubelet[2932]: E1106 05:56:19.178857 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:19.180126 kubelet[2932]: E1106 05:56:19.180036 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:56:23.689307 systemd[1]: Started sshd@15-10.230.27.98:22-139.178.68.195:33256.service - OpenSSH per-connection server daemon (139.178.68.195:33256). Nov 6 05:56:24.502003 sshd[5312]: Accepted publickey for core from 139.178.68.195 port 33256 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:24.504184 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:24.512276 systemd-logind[1611]: New session 18 of user core. Nov 6 05:56:24.520716 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 6 05:56:24.529449 containerd[1638]: time="2025-11-06T05:56:24.529397002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 6 05:56:24.857872 containerd[1638]: time="2025-11-06T05:56:24.857175265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:24.859182 containerd[1638]: time="2025-11-06T05:56:24.858781990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 6 05:56:24.860339 containerd[1638]: time="2025-11-06T05:56:24.858816786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:24.860727 kubelet[2932]: E1106 05:56:24.860651 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:56:24.861367 kubelet[2932]: E1106 05:56:24.860742 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 6 05:56:24.861367 kubelet[2932]: E1106 05:56:24.860999 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68xk5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-4vklp_calico-system(9a7a74a3-52cd-4b77-ac72-984211b63b0e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:24.863331 kubelet[2932]: E1106 05:56:24.862213 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:56:25.084101 sshd[5315]: Connection closed by 139.178.68.195 port 33256 Nov 6 05:56:25.085723 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:25.092692 systemd[1]: sshd@15-10.230.27.98:22-139.178.68.195:33256.service: Deactivated successfully. Nov 6 05:56:25.096549 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 05:56:25.099203 systemd-logind[1611]: Session 18 logged out. Waiting for processes to exit. Nov 6 05:56:25.103012 systemd-logind[1611]: Removed session 18. 
Nov 6 05:56:25.523305 containerd[1638]: time="2025-11-06T05:56:25.523173722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 6 05:56:25.845285 containerd[1638]: time="2025-11-06T05:56:25.844919841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:25.847291 containerd[1638]: time="2025-11-06T05:56:25.847239355Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 6 05:56:25.847407 containerd[1638]: time="2025-11-06T05:56:25.847357798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:25.848199 kubelet[2932]: E1106 05:56:25.847706 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:56:25.848199 kubelet[2932]: E1106 05:56:25.847850 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 6 05:56:25.848463 kubelet[2932]: E1106 05:56:25.848395 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9w4wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-859c8f8bcd-gw95r_calico-system(5513e9ee-51f3-4098-ad93-0cffcda4f037): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:25.849898 kubelet[2932]: E1106 05:56:25.849826 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:56:28.524485 containerd[1638]: time="2025-11-06T05:56:28.523856499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 6 05:56:28.843818 containerd[1638]: time="2025-11-06T05:56:28.843380163Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:28.846008 containerd[1638]: time="2025-11-06T05:56:28.845300307Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 6 05:56:28.846008 containerd[1638]: time="2025-11-06T05:56:28.845328906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:28.846217 kubelet[2932]: E1106 05:56:28.845872 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:56:28.846217 kubelet[2932]: E1106 05:56:28.846127 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 6 05:56:28.848593 kubelet[2932]: E1106 05:56:28.846491 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flp8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:28.857459 containerd[1638]: time="2025-11-06T05:56:28.857386353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 6 05:56:29.166422 containerd[1638]: time="2025-11-06T05:56:29.166327633Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:29.167904 containerd[1638]: time="2025-11-06T05:56:29.167833795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 6 05:56:29.168293 containerd[1638]: time="2025-11-06T05:56:29.167951621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:29.168818 kubelet[2932]: E1106 05:56:29.168728 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:56:29.168898 kubelet[2932]: E1106 05:56:29.168835 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 6 05:56:29.169201 kubelet[2932]: E1106 05:56:29.169060 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flp8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-298gz_calico-system(9b7c3e80-078e-47aa-8574-17ccfe24f839): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:29.170715 kubelet[2932]: E1106 05:56:29.170645 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:56:29.523192 containerd[1638]: time="2025-11-06T05:56:29.522449704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:56:29.523921 kubelet[2932]: E1106 05:56:29.522516 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:56:29.838067 containerd[1638]: time="2025-11-06T05:56:29.837815637Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:29.839940 containerd[1638]: time="2025-11-06T05:56:29.839882827Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:56:29.841195 containerd[1638]: time="2025-11-06T05:56:29.840024097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:29.842226 kubelet[2932]: E1106 05:56:29.840209 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:56:29.842226 kubelet[2932]: E1106 05:56:29.840309 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:56:29.842226 kubelet[2932]: E1106 05:56:29.840523 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b9hmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8cd67c9c6-zmk4q_calico-apiserver(da766196-4491-43f8-a7f4-97b6b2ff4f0a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:29.842515 kubelet[2932]: E1106 05:56:29.842251 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:56:30.249777 systemd[1]: Started sshd@16-10.230.27.98:22-139.178.68.195:33260.service - OpenSSH per-connection server daemon (139.178.68.195:33260). Nov 6 05:56:31.079640 sshd[5330]: Accepted publickey for core from 139.178.68.195 port 33260 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:31.083054 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:31.091957 systemd-logind[1611]: New session 19 of user core. Nov 6 05:56:31.096346 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 6 05:56:31.680694 sshd[5333]: Connection closed by 139.178.68.195 port 33260 Nov 6 05:56:31.681869 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:31.691294 systemd-logind[1611]: Session 19 logged out. Waiting for processes to exit. Nov 6 05:56:31.691977 systemd[1]: sshd@16-10.230.27.98:22-139.178.68.195:33260.service: Deactivated successfully. Nov 6 05:56:31.696531 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 05:56:31.701729 systemd-logind[1611]: Removed session 19. Nov 6 05:56:31.843255 systemd[1]: Started sshd@17-10.230.27.98:22-139.178.68.195:33272.service - OpenSSH per-connection server daemon (139.178.68.195:33272). Nov 6 05:56:32.666837 sshd[5344]: Accepted publickey for core from 139.178.68.195 port 33272 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:32.668964 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:32.677773 systemd-logind[1611]: New session 20 of user core. Nov 6 05:56:32.683438 systemd[1]: Started session-20.scope - Session 20 of User core. 
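The entries above show the same NotFound failure for several ghcr.io/flatcar/calico images at tag v3.30.4 (csi, node-driver-registrar, apiserver). Below is a minimal Python sketch for summarizing a saved copy of this journal offline; the path node-journal.log is hypothetical, and the script relies only on the "failed to resolve image: <ref>: not found" phrasing visible in the containerd and kubelet lines above.

#!/usr/bin/env python3
"""Tally image pull failures from a saved kubelet/containerd journal dump.

Sketch only: it assumes lines shaped like the entries above, where a failed
pull mentions 'failed to resolve image: <ref>: not found'. The input path
node-journal.log is hypothetical.
"""
import re
from collections import Counter

FAILED_REF = re.compile(r"failed to resolve image: ([\w./:-]+): not found")

def tally_failed_pulls(path: str) -> Counter:
    counts: Counter = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            counts.update(FAILED_REF.findall(line))
    return counts

if __name__ == "__main__":
    for ref, n in tally_failed_pulls("node-journal.log").most_common():
        print(f"{n:4d}  {ref}")

Run against a dump like this section, it would list each unresolved image reference with its retry count.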
Nov 6 05:56:33.683643 sshd[5347]: Connection closed by 139.178.68.195 port 33272 Nov 6 05:56:33.684883 sshd-session[5344]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:33.694497 systemd[1]: sshd@17-10.230.27.98:22-139.178.68.195:33272.service: Deactivated successfully. Nov 6 05:56:33.698963 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 05:56:33.702061 systemd-logind[1611]: Session 20 logged out. Waiting for processes to exit. Nov 6 05:56:33.705013 systemd-logind[1611]: Removed session 20. Nov 6 05:56:33.841995 systemd[1]: Started sshd@18-10.230.27.98:22-139.178.68.195:56736.service - OpenSSH per-connection server daemon (139.178.68.195:56736). Nov 6 05:56:34.529079 kubelet[2932]: E1106 05:56:34.529010 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:56:34.680031 sshd[5357]: Accepted publickey for core from 139.178.68.195 port 56736 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:34.682831 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:34.691214 systemd-logind[1611]: New session 21 of user core. Nov 6 05:56:34.696443 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 05:56:35.970948 sshd[5360]: Connection closed by 139.178.68.195 port 56736 Nov 6 05:56:35.972940 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:35.980357 systemd[1]: sshd@18-10.230.27.98:22-139.178.68.195:56736.service: Deactivated successfully. Nov 6 05:56:35.985856 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 05:56:35.989506 systemd-logind[1611]: Session 21 logged out. Waiting for processes to exit. Nov 6 05:56:35.993859 systemd-logind[1611]: Removed session 21. Nov 6 05:56:36.134937 systemd[1]: Started sshd@19-10.230.27.98:22-139.178.68.195:56742.service - OpenSSH per-connection server daemon (139.178.68.195:56742). Nov 6 05:56:36.954185 sshd[5404]: Accepted publickey for core from 139.178.68.195 port 56742 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:36.956258 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:36.963709 systemd-logind[1611]: New session 22 of user core. Nov 6 05:56:36.969346 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 6 05:56:37.521597 kubelet[2932]: E1106 05:56:37.521490 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:56:37.526721 kubelet[2932]: E1106 05:56:37.526607 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:56:37.853690 sshd[5409]: Connection closed by 139.178.68.195 port 56742 Nov 6 05:56:37.854669 sshd-session[5404]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:37.860766 systemd[1]: sshd@19-10.230.27.98:22-139.178.68.195:56742.service: Deactivated successfully. Nov 6 05:56:37.864655 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 05:56:37.867346 systemd-logind[1611]: Session 22 logged out. Waiting for processes to exit. Nov 6 05:56:37.870797 systemd-logind[1611]: Removed session 22. Nov 6 05:56:38.015470 systemd[1]: Started sshd@20-10.230.27.98:22-139.178.68.195:56750.service - OpenSSH per-connection server daemon (139.178.68.195:56750). Nov 6 05:56:38.828532 sshd[5419]: Accepted publickey for core from 139.178.68.195 port 56750 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:38.830430 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:38.838192 systemd-logind[1611]: New session 23 of user core. Nov 6 05:56:38.846403 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 05:56:39.373641 sshd[5422]: Connection closed by 139.178.68.195 port 56750 Nov 6 05:56:39.374572 sshd-session[5419]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:39.380356 systemd[1]: sshd@20-10.230.27.98:22-139.178.68.195:56750.service: Deactivated successfully. Nov 6 05:56:39.383183 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 05:56:39.385799 systemd-logind[1611]: Session 23 logged out. Waiting for processes to exit. Nov 6 05:56:39.388317 systemd-logind[1611]: Removed session 23. 
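Interleaved with the pull failures, sshd and systemd-logind record a series of short sessions for user core (sessions 19 through 23 so far). The sketch below pairs the "New session" and "Removed session" entries to report per-session durations; the input path is hypothetical, and the year, which the journal omits, is assumed only so timestamps can be subtracted.

"""Pair systemd-logind 'New session' / 'Removed session' entries and report
per-session durations. The path is hypothetical; the journal omits the year,
so 2025 is assumed purely to make the timestamps subtractable."""
import re
from datetime import datetime

NEW = re.compile(r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: New session (\d+) of user (\w+)")
GONE = re.compile(r"(\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: Removed session (\d+)\.")

def parse_ts(raw: str) -> datetime:
    return datetime.strptime(f"2025 {raw}", "%Y %b %d %H:%M:%S.%f")

def session_durations(path: str):
    opened = {}
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:  # several journal entries can share one physical line
            for m in NEW.finditer(line):
                opened[m.group(2)] = (parse_ts(m.group(1)), m.group(3))
            for m in GONE.finditer(line):
                start = opened.pop(m.group(2), None)
                if start is not None:
                    ts, user = start
                    yield m.group(2), user, (parse_ts(m.group(1)) - ts).total_seconds()

if __name__ == "__main__":
    for sid, user, secs in session_durations("node-journal.log"):
        print(f"session {sid} ({user}): open for {secs:.1f}s")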
Nov 6 05:56:41.531258 kubelet[2932]: E1106 05:56:41.528541 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:56:43.523150 kubelet[2932]: E1106 05:56:43.522988 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:56:44.522211 kubelet[2932]: E1106 05:56:44.520287 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:56:44.538643 systemd[1]: Started sshd@21-10.230.27.98:22-139.178.68.195:35416.service - OpenSSH per-connection server daemon (139.178.68.195:35416). Nov 6 05:56:45.351067 sshd[5435]: Accepted publickey for core from 139.178.68.195 port 35416 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:45.351847 sshd-session[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:45.360338 systemd-logind[1611]: New session 24 of user core. Nov 6 05:56:45.371428 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 6 05:56:45.948401 sshd[5440]: Connection closed by 139.178.68.195 port 35416 Nov 6 05:56:45.946915 sshd-session[5435]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:45.955456 systemd[1]: sshd@21-10.230.27.98:22-139.178.68.195:35416.service: Deactivated successfully. Nov 6 05:56:45.959949 systemd[1]: session-24.scope: Deactivated successfully. Nov 6 05:56:45.963312 systemd-logind[1611]: Session 24 logged out. Waiting for processes to exit. Nov 6 05:56:45.966751 systemd-logind[1611]: Removed session 24. 
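The alternation between ErrImagePull and "Back-off pulling image" in these entries reflects the kubelet retrying each pull on an exponential back-off; the pod workers also log "Error syncing pod, skipping" on every sync while a back-off is still running, which is why the same message recurs between actual pull attempts. The schedule below is a sketch assuming the commonly cited kubelet defaults of a 10 s initial delay doubling up to a 300 s cap; those values are assumptions, not something read from this node's configuration.

"""Sketch of an image-pull back-off schedule. The 10 s initial delay and
300 s cap are assumed kubelet defaults, not values read from this node."""

def backoff_schedule(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay = initial
    for attempt in range(1, attempts + 1):
        yield attempt, min(delay, cap)  # wait this long before the next pull
        delay = min(delay * 2, cap)

if __name__ == "__main__":
    for attempt, delay in backoff_schedule():
        print(f"retry {attempt}: wait {delay:.0f}s")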
Nov 6 05:56:48.520787 kubelet[2932]: E1106 05:56:48.520651 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:56:49.523509 kubelet[2932]: E1106 05:56:49.523390 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037" Nov 6 05:56:50.522297 kubelet[2932]: E1106 05:56:50.521833 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:56:51.118826 systemd[1]: Started sshd@22-10.230.27.98:22-139.178.68.195:35432.service - OpenSSH per-connection server daemon (139.178.68.195:35432). Nov 6 05:56:51.997242 sshd[5453]: Accepted publickey for core from 139.178.68.195 port 35432 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:52.000484 sshd-session[5453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:52.013365 systemd-logind[1611]: New session 25 of user core. Nov 6 05:56:52.018469 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 6 05:56:52.535620 kubelet[2932]: E1106 05:56:52.535417 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-298gz" podUID="9b7c3e80-078e-47aa-8574-17ccfe24f839" Nov 6 05:56:52.715421 sshd[5456]: Connection closed by 139.178.68.195 port 35432 Nov 6 05:56:52.715920 sshd-session[5453]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:52.724986 systemd[1]: sshd@22-10.230.27.98:22-139.178.68.195:35432.service: Deactivated successfully. Nov 6 05:56:52.729601 systemd[1]: session-25.scope: Deactivated successfully. Nov 6 05:56:52.731388 systemd-logind[1611]: Session 25 logged out. Waiting for processes to exit. Nov 6 05:56:52.734172 systemd-logind[1611]: Removed session 25. Nov 6 05:56:57.521581 kubelet[2932]: E1106 05:56:57.521310 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-zmk4q" podUID="da766196-4491-43f8-a7f4-97b6b2ff4f0a" Nov 6 05:56:57.882649 systemd[1]: Started sshd@23-10.230.27.98:22-139.178.68.195:39514.service - OpenSSH per-connection server daemon (139.178.68.195:39514). Nov 6 05:56:58.712715 sshd[5467]: Accepted publickey for core from 139.178.68.195 port 39514 ssh2: RSA SHA256:BfHYK8RQeJdQN5xl3BsDIxBMbYNyZVYlC0Yheg1aMu0 Nov 6 05:56:58.715600 sshd-session[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 05:56:58.728006 systemd-logind[1611]: New session 26 of user core. Nov 6 05:56:58.734361 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 6 05:56:59.380198 sshd[5476]: Connection closed by 139.178.68.195 port 39514 Nov 6 05:56:59.381442 sshd-session[5467]: pam_unix(sshd:session): session closed for user core Nov 6 05:56:59.394447 systemd[1]: sshd@23-10.230.27.98:22-139.178.68.195:39514.service: Deactivated successfully. Nov 6 05:56:59.407744 systemd[1]: session-26.scope: Deactivated successfully. Nov 6 05:56:59.412820 systemd-logind[1611]: Session 26 logged out. Waiting for processes to exit. Nov 6 05:56:59.417709 systemd-logind[1611]: Removed session 26. 
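Each "Unhandled Error" entry above dumps the full &Container{...} spec of the container that failed to start. The sketch below pulls just the Name and Image fields back out of such a dump, again over a hypothetical saved copy of the journal, to produce a deduplicated list of the containers stuck on the missing v3.30.4 tags.

"""Extract Name/Image pairs from the '&Container{...}' dumps in the
'Unhandled Error' entries. Works on a hypothetical saved copy of the journal;
it does not query the API server."""
import re

CONTAINER = re.compile(r"&Container\{Name:([^,]+),Image:([^,]+),")

def failing_containers(path: str):
    seen = set()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            seen.update(CONTAINER.findall(line))
    return sorted(seen)

if __name__ == "__main__":
    for name, image in failing_containers("node-journal.log"):
        print(f"{name:28s} {image}")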
Nov 6 05:56:59.523511 containerd[1638]: time="2025-11-06T05:56:59.523149270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 6 05:56:59.841209 containerd[1638]: time="2025-11-06T05:56:59.840880453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:56:59.844037 containerd[1638]: time="2025-11-06T05:56:59.843873529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 6 05:56:59.845163 containerd[1638]: time="2025-11-06T05:56:59.843917942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 6 05:56:59.845650 kubelet[2932]: E1106 05:56:59.845570 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:56:59.846957 kubelet[2932]: E1106 05:56:59.846334 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 6 05:56:59.846957 kubelet[2932]: E1106 05:56:59.846724 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:701edbcabe75434aab47d246a6d809dc,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 6 05:56:59.848319 containerd[1638]: time="2025-11-06T05:56:59.848282383Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 6 05:57:00.162194 containerd[1638]: time="2025-11-06T05:57:00.161931223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:57:00.164379 containerd[1638]: time="2025-11-06T05:57:00.164186306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 6 05:57:00.164379 containerd[1638]: time="2025-11-06T05:57:00.164257181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 6 05:57:00.165316 kubelet[2932]: E1106 05:57:00.164577 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:57:00.165316 kubelet[2932]: E1106 05:57:00.164659 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 6 05:57:00.165316 kubelet[2932]: E1106 05:57:00.165102 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xltmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-8cd67c9c6-hw8f5_calico-apiserver(6eba05f7-0eaa-45c3-8192-73bb69abd3a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 6 05:57:00.167541 kubelet[2932]: E1106 05:57:00.166675 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-8cd67c9c6-hw8f5" podUID="6eba05f7-0eaa-45c3-8192-73bb69abd3a6" Nov 6 05:57:00.167856 containerd[1638]: time="2025-11-06T05:57:00.166566903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 6 05:57:00.483127 containerd[1638]: time="2025-11-06T05:57:00.482901216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 6 05:57:00.486154 containerd[1638]: time="2025-11-06T05:57:00.484308653Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 6 05:57:00.486770 containerd[1638]: time="2025-11-06T05:57:00.484418594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 6 05:57:00.486847 kubelet[2932]: E1106 05:57:00.486543 2932 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:57:00.486847 kubelet[2932]: E1106 05:57:00.486625 2932 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 6 05:57:00.492994 kubelet[2932]: E1106 05:57:00.492293 2932 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-msgdf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-859b8f77d-fjqls_calico-system(09d71414-7028-4395-b830-79141d516415): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 6 05:57:00.494159 kubelet[2932]: E1106 05:57:00.494056 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-859b8f77d-fjqls" podUID="09d71414-7028-4395-b830-79141d516415" Nov 6 05:57:01.521657 kubelet[2932]: E1106 05:57:01.521515 2932 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-4vklp" podUID="9a7a74a3-52cd-4b77-ac72-984211b63b0e" Nov 6 05:57:02.524425 kubelet[2932]: E1106 05:57:02.524350 2932 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-859c8f8bcd-gw95r" podUID="5513e9ee-51f3-4098-ad93-0cffcda4f037"