Jan 23 19:54:24.950725 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 19:54:24.950759 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:54:24.950773 kernel: BIOS-provided physical RAM map:
Jan 23 19:54:24.950783 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 19:54:24.950797 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 19:54:24.950807 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 19:54:24.950819 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 23 19:54:24.950836 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 23 19:54:24.950846 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 23 19:54:24.950856 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 23 19:54:24.950867 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 19:54:24.950877 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 19:54:24.950887 kernel: NX (Execute Disable) protection: active
Jan 23 19:54:24.950902 kernel: APIC: Static calls initialized
Jan 23 19:54:24.950914 kernel: SMBIOS 2.8 present.
Jan 23 19:54:24.950925 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 23 19:54:24.950954 kernel: DMI: Memory slots populated: 1/1
Jan 23 19:54:24.950974 kernel: Hypervisor detected: KVM
Jan 23 19:54:24.950985 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 19:54:24.951000 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 19:54:24.951011 kernel: kvm-clock: using sched offset of 6403816820 cycles
Jan 23 19:54:24.951022 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 19:54:24.951033 kernel: tsc: Detected 2799.998 MHz processor
Jan 23 19:54:24.951044 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 19:54:24.951055 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 19:54:24.951066 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 23 19:54:24.951077 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 19:54:24.951088 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 19:54:24.951111 kernel: Using GB pages for direct mapping
Jan 23 19:54:24.951122 kernel: ACPI: Early table checksum verification disabled
Jan 23 19:54:24.951145 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 19:54:24.951164 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:54:24.951175 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:54:24.951185 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:54:24.951196 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 23 19:54:24.951206 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:54:24.951216 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:54:24.951231 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:54:24.951242 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 19:54:24.951253 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 23 19:54:24.951281 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 23 19:54:24.951292 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 23 19:54:24.951303 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 23 19:54:24.951318 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 23 19:54:24.951342 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 23 19:54:24.951353 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 23 19:54:24.951364 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 23 19:54:24.951376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 23 19:54:24.951387 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 23 19:54:24.951399 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 23 19:54:24.951410 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 23 19:54:24.953466 kernel: Zone ranges:
Jan 23 19:54:24.953482 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 19:54:24.953495 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 23 19:54:24.953506 kernel: Normal empty
Jan 23 19:54:24.953518 kernel: Device empty
Jan 23 19:54:24.953529 kernel: Movable zone start for each node
Jan 23 19:54:24.953541 kernel: Early memory node ranges
Jan 23 19:54:24.953552 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 19:54:24.953563 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 23 19:54:24.953582 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 23 19:54:24.953594 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 19:54:24.953613 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 19:54:24.953625 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 23 19:54:24.953637 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 19:54:24.953652 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 19:54:24.953664 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 19:54:24.953676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 19:54:24.953687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 19:54:24.953699 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 19:54:24.953721 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 19:54:24.953733 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 19:54:24.953744 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 19:54:24.953767 kernel: TSC deadline timer available
Jan 23 19:54:24.953778 kernel: CPU topo: Max. logical packages: 16
Jan 23 19:54:24.953788 kernel: CPU topo: Max. logical dies: 16
Jan 23 19:54:24.953799 kernel: CPU topo: Max. dies per package: 1
Jan 23 19:54:24.953810 kernel: CPU topo: Max. threads per core: 1
Jan 23 19:54:24.953820 kernel: CPU topo: Num. cores per package: 1
Jan 23 19:54:24.953836 kernel: CPU topo: Num. threads per package: 1
Jan 23 19:54:24.953846 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 23 19:54:24.953857 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 19:54:24.953868 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 23 19:54:24.953878 kernel: Booting paravirtualized kernel on KVM
Jan 23 19:54:24.953901 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 19:54:24.953912 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 23 19:54:24.953923 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 23 19:54:24.953933 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 23 19:54:24.953948 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 23 19:54:24.953958 kernel: kvm-guest: PV spinlocks enabled
Jan 23 19:54:24.953981 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 19:54:24.953994 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:54:24.954006 kernel: random: crng init done
Jan 23 19:54:24.954016 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 19:54:24.954027 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 19:54:24.954038 kernel: Fallback order for Node 0: 0
Jan 23 19:54:24.954055 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 23 19:54:24.954079 kernel: Policy zone: DMA32
Jan 23 19:54:24.954090 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 19:54:24.954101 kernel: software IO TLB: area num 16.
Jan 23 19:54:24.954112 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 23 19:54:24.954123 kernel: Kernel/User page tables isolation: enabled
Jan 23 19:54:24.954147 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 19:54:24.954158 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 19:54:24.954170 kernel: Dynamic Preempt: voluntary
Jan 23 19:54:24.954186 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 19:54:24.954198 kernel: rcu: RCU event tracing is enabled.
Jan 23 19:54:24.954210 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 23 19:54:24.954222 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 19:54:24.954243 kernel: Rude variant of Tasks RCU enabled.
Jan 23 19:54:24.954255 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 19:54:24.954267 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 19:54:24.954278 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 23 19:54:24.954290 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 19:54:24.954312 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 19:54:24.954323 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 23 19:54:24.954335 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 23 19:54:24.954347 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 19:54:24.954377 kernel: Console: colour VGA+ 80x25
Jan 23 19:54:24.954393 kernel: printk: legacy console [tty0] enabled
Jan 23 19:54:24.954405 kernel: printk: legacy console [ttyS0] enabled
Jan 23 19:54:24.954417 kernel: ACPI: Core revision 20240827
Jan 23 19:54:24.954465 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 19:54:24.954478 kernel: x2apic enabled
Jan 23 19:54:24.954490 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 19:54:24.954502 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 23 19:54:24.954521 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 23 19:54:24.954533 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 19:54:24.954545 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 19:54:24.954557 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 19:54:24.954569 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 19:54:24.954585 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 19:54:24.954597 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 19:54:24.954609 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 23 19:54:24.954621 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 19:54:24.954633 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 19:54:24.954645 kernel: MDS: Mitigation: Clear CPU buffers
Jan 23 19:54:24.954656 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 23 19:54:24.954668 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 23 19:54:24.954680 kernel: active return thunk: its_return_thunk
Jan 23 19:54:24.954692 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 19:54:24.954704 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 19:54:24.954720 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 19:54:24.954732 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 19:54:24.954744 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 19:54:24.954756 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 23 19:54:24.954768 kernel: Freeing SMP alternatives memory: 32K
Jan 23 19:54:24.954779 kernel: pid_max: default: 32768 minimum: 301
Jan 23 19:54:24.954791 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 19:54:24.954803 kernel: landlock: Up and running.
Jan 23 19:54:24.954819 kernel: SELinux: Initializing.
Jan 23 19:54:24.954831 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 19:54:24.954843 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 23 19:54:24.954859 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 23 19:54:24.954872 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 23 19:54:24.954884 kernel: signal: max sigframe size: 1776
Jan 23 19:54:24.954901 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 19:54:24.954920 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 19:54:24.954932 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jan 23 19:54:24.954944 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 19:54:24.954956 kernel: smp: Bringing up secondary CPUs ...
Jan 23 19:54:24.954968 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 19:54:24.954984 kernel: .... node #0, CPUs: #1
Jan 23 19:54:24.954996 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 19:54:24.955008 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 23 19:54:24.955021 kernel: Memory: 1887484K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 203116K reserved, 0K cma-reserved)
Jan 23 19:54:24.955033 kernel: devtmpfs: initialized
Jan 23 19:54:24.955046 kernel: x86/mm: Memory block size: 128MB
Jan 23 19:54:24.955065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 19:54:24.955077 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 23 19:54:24.955089 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 19:54:24.955105 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 19:54:24.955117 kernel: audit: initializing netlink subsys (disabled)
Jan 23 19:54:24.955129 kernel: audit: type=2000 audit(1769198061.439:1): state=initialized audit_enabled=0 res=1
Jan 23 19:54:24.955141 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 19:54:24.955153 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 19:54:24.955165 kernel: cpuidle: using governor menu
Jan 23 19:54:24.955177 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 19:54:24.955189 kernel: dca service started, version 1.12.1
Jan 23 19:54:24.955201 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 23 19:54:24.955218 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 23 19:54:24.955230 kernel: PCI: Using configuration type 1 for base access
Jan 23 19:54:24.955242 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 19:54:24.955254 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 19:54:24.955266 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 19:54:24.955278 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 19:54:24.955290 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 19:54:24.955302 kernel: ACPI: Added _OSI(Module Device)
Jan 23 19:54:24.955314 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 19:54:24.955331 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 19:54:24.955343 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 19:54:24.955355 kernel: ACPI: Interpreter enabled
Jan 23 19:54:24.955367 kernel: ACPI: PM: (supports S0 S5)
Jan 23 19:54:24.955379 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 19:54:24.955391 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 19:54:24.955403 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 19:54:24.955415 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 19:54:24.957593 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 19:54:24.957915 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 19:54:24.958073 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 19:54:24.958235 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 19:54:24.958261 kernel: PCI host bridge to bus 0000:00
Jan 23 19:54:24.958496 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 19:54:24.958651 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 19:54:24.958808 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 19:54:24.958978 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 23 19:54:24.959116 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 19:54:24.959264 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 23 19:54:24.959412 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 19:54:24.963711 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 19:54:24.963905 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jan 23 19:54:24.964100 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jan 23 19:54:24.964256 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jan 23 19:54:24.964471 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jan 23 19:54:24.964642 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 19:54:24.964832 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.965006 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jan 23 19:54:24.965176 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 19:54:24.965336 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 19:54:24.967865 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 19:54:24.968085 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.968245 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jan 23 19:54:24.968410 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 19:54:24.970216 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 19:54:24.970399 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 19:54:24.970637 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.970807 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jan 23 19:54:24.970973 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 19:54:24.971159 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 19:54:24.971347 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 19:54:24.971892 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.972060 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jan 23 19:54:24.972232 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 19:54:24.972414 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 19:54:24.972613 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 19:54:24.972802 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.972966 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jan 23 19:54:24.973126 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 19:54:24.973319 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 19:54:24.973580 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 19:54:24.973799 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.973965 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jan 23 19:54:24.974501 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 19:54:24.974677 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 19:54:24.974843 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 19:54:24.975026 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.975198 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jan 23 19:54:24.975359 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 19:54:24.975569 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 19:54:24.975743 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 19:54:24.975926 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 19:54:24.976088 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jan 23 19:54:24.976256 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 19:54:24.976415 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 19:54:24.976619 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 19:54:24.976810 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 19:54:24.976973 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 23 19:54:24.977134 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jan 23 19:54:24.977295 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 23 19:54:24.977495 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jan 23 19:54:24.977710 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 19:54:24.977875 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jan 23 19:54:24.978035 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jan 23 19:54:24.978198 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 23 19:54:24.978378 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 19:54:24.978979 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 19:54:24.979164 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 19:54:24.979327 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jan 23 19:54:24.979532 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jan 23 19:54:24.979713 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 19:54:24.979876 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 23 19:54:24.980061 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 23 19:54:24.980228 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jan 23 19:54:24.980402 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 19:54:24.980646 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 19:54:24.980811 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 19:54:24.981026 kernel: pci_bus 0000:02: extended config space not accessible
Jan 23 19:54:24.981220 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jan 23 19:54:24.981393 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jan 23 19:54:24.983638 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 19:54:24.983867 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 19:54:24.984044 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jan 23 19:54:24.984209 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 19:54:24.984399 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 19:54:24.984603 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 23 19:54:24.984769 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 19:54:24.984943 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 19:54:24.985113 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 19:54:24.985297 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 19:54:24.985544 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 19:54:24.985710 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 19:54:24.985730 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 19:54:24.985743 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 19:54:24.985763 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 19:54:24.985776 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 19:54:24.985788 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 19:54:24.985801 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 19:54:24.985813 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 19:54:24.985825 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 19:54:24.985837 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 19:54:24.985850 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 19:54:24.985862 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 19:54:24.985878 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 19:54:24.985891 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 19:54:24.985903 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 19:54:24.985915 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 19:54:24.985928 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 19:54:24.985940 kernel: iommu: Default domain type: Translated
Jan 23 19:54:24.985952 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 19:54:24.985964 kernel: PCI: Using ACPI for IRQ routing
Jan 23 19:54:24.985977 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 19:54:24.985993 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 23 19:54:24.986006 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 23 19:54:24.986162 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 19:54:24.986320 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 19:54:24.986526 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 19:54:24.986547 kernel: vgaarb: loaded
Jan 23 19:54:24.986559 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 19:54:24.986572 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 19:54:24.986591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 19:54:24.986604 kernel: pnp: PnP ACPI init
Jan 23 19:54:24.986794 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 23 19:54:24.986815 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 19:54:24.986828 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 19:54:24.986841 kernel: NET: Registered PF_INET protocol family
Jan 23 19:54:24.986853 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 19:54:24.986865 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 23 19:54:24.986878 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 19:54:24.986897 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 19:54:24.986910 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 23 19:54:24.986922 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 23 19:54:24.986935 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 19:54:24.986948 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 23 19:54:24.986960 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 19:54:24.986973 kernel: NET: Registered PF_XDP protocol family
Jan 23 19:54:24.987130 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 23 19:54:24.987297 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 23 19:54:24.987532 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 23 19:54:24.987703 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 23 19:54:24.987864 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 23 19:54:24.988023 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 19:54:24.988183 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 19:54:24.988343 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 19:54:24.988531 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 19:54:24.988712 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 19:54:24.988905 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 19:54:24.989065 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 23 19:54:24.989223 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 23 19:54:24.989382 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 23 19:54:24.989645 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 23 19:54:24.989806 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 23 19:54:24.989971 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 23 19:54:24.990162 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 23 19:54:24.990322 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 23 19:54:24.990523 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 23 19:54:24.990684 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 23 19:54:24.990844 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 19:54:24.991003 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 23 19:54:24.991163 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 23 19:54:24.991324 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 23 19:54:24.991524 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 19:54:24.991694 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 23 19:54:24.991855 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 23 19:54:24.992014 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 23 19:54:24.992181 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 19:54:24.992342 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 23 19:54:24.992535 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 23 19:54:24.992705 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 23 19:54:24.992865 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 19:54:24.993024 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 23 19:54:24.993184 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 23 19:54:24.993343 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 23 19:54:24.993545 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 19:54:24.993707 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 23 19:54:24.993867 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 23 19:54:24.994027 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 23 19:54:24.994185 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 23 19:54:24.994346 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 23 19:54:24.994535 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 23 19:54:24.995096 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 23 19:54:24.995265 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 23 19:54:24.995468 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 23 19:54:24.995639 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 23 19:54:24.995799 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 23 19:54:24.995958 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 23 19:54:24.996117 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 19:54:24.996264 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 19:54:24.996409 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 19:54:24.996593 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 23 19:54:24.996740 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 23 19:54:24.996893 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 23 19:54:24.997076 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 23 19:54:24.997230 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 23 19:54:24.997380 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 23 19:54:24.997609 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 23 19:54:24.997792 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 23 19:54:24.997944 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 23 19:54:24.998102 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 23 19:54:24.998306 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 23 19:54:24.998525 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 23 19:54:24.998680 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 23 19:54:24.998850 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 23 19:54:24.999002 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 23 19:54:24.999159 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 23 19:54:24.999328 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 23 19:54:24.999510 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 23 19:54:24.999663 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 23 19:54:24.999831 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 23 19:54:24.999983 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 23
19:54:25.000132 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 23 19:54:25.000300 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 23 19:54:25.000486 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 23 19:54:25.000640 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 23 19:54:25.000800 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 23 19:54:25.000950 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 23 19:54:25.001111 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 23 19:54:25.001132 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 23 19:54:25.001152 kernel: PCI: CLS 0 bytes, default 64 Jan 23 19:54:25.001166 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 23 19:54:25.001178 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 23 19:54:25.001191 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 23 19:54:25.001205 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 23 19:54:25.001218 kernel: Initialise system trusted keyrings Jan 23 19:54:25.001231 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 23 19:54:25.001244 kernel: Key type asymmetric registered Jan 23 19:54:25.001257 kernel: Asymmetric key parser 'x509' registered Jan 23 19:54:25.001274 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 19:54:25.001287 kernel: io scheduler mq-deadline registered Jan 23 19:54:25.001299 kernel: io scheduler kyber registered Jan 23 19:54:25.001312 kernel: io scheduler bfq registered Jan 23 19:54:25.001515 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 23 19:54:25.001679 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 23 19:54:25.001839 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.002008 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 23 19:54:25.002168 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 23 19:54:25.002327 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.002519 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 23 19:54:25.002680 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 23 19:54:25.002841 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.003012 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 23 19:54:25.003173 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 23 19:54:25.003340 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.003531 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 23 19:54:25.003694 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 23 19:54:25.003855 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.004023 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 23 19:54:25.004184 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 23 19:54:25.004345 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.004538 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 23 19:54:25.004698 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 23 19:54:25.004859 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.005028 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 23 19:54:25.005187 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 23 19:54:25.005348 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 23 19:54:25.005368 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 23 19:54:25.005382 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 23 19:54:25.005395 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 23 19:54:25.005408 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 19:54:25.005446 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 23 19:54:25.005469 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 23 19:54:25.005482 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 23 19:54:25.005494 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 23 19:54:25.005512 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 23 19:54:25.005718 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 23 19:54:25.005874 kernel: rtc_cmos 00:03: registered as rtc0 Jan 23 19:54:25.006025 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T19:54:24 UTC (1769198064) Jan 23 19:54:25.006175 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 23 19:54:25.006201 kernel: intel_pstate: CPU model not supported Jan 23 19:54:25.006214 kernel: NET: Registered PF_INET6 protocol family Jan 23 19:54:25.006227 kernel: Segment Routing with IPv6 Jan 23 19:54:25.006240 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 19:54:25.006253 kernel: NET: Registered PF_PACKET protocol family Jan 23 19:54:25.006265 kernel: Key type dns_resolver registered Jan 23 19:54:25.006278 kernel: IPI shorthand broadcast: enabled Jan 23 19:54:25.006291 kernel: 
sched_clock: Marking stable (3554011769, 219594417)->(3919090215, -145484029) Jan 23 19:54:25.006303 kernel: registered taskstats version 1 Jan 23 19:54:25.006321 kernel: Loading compiled-in X.509 certificates Jan 23 19:54:25.006334 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6' Jan 23 19:54:25.006346 kernel: Demotion targets for Node 0: null Jan 23 19:54:25.006359 kernel: Key type .fscrypt registered Jan 23 19:54:25.006371 kernel: Key type fscrypt-provisioning registered Jan 23 19:54:25.006384 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 19:54:25.006396 kernel: ima: Allocated hash algorithm: sha1 Jan 23 19:54:25.006409 kernel: ima: No architecture policies found Jan 23 19:54:25.006454 kernel: clk: Disabling unused clocks Jan 23 19:54:25.006476 kernel: Warning: unable to open an initial console. Jan 23 19:54:25.006489 kernel: Freeing unused kernel image (initmem) memory: 46200K Jan 23 19:54:25.006502 kernel: Write protecting the kernel read-only data: 40960k Jan 23 19:54:25.006515 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 23 19:54:25.006527 kernel: Run /init as init process Jan 23 19:54:25.006540 kernel: with arguments: Jan 23 19:54:25.006553 kernel: /init Jan 23 19:54:25.006566 kernel: with environment: Jan 23 19:54:25.006578 kernel: HOME=/ Jan 23 19:54:25.006595 kernel: TERM=linux Jan 23 19:54:25.006610 systemd[1]: Successfully made /usr/ read-only. Jan 23 19:54:25.006626 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 19:54:25.006640 systemd[1]: Detected virtualization kvm. 
Jan 23 19:54:25.006653 systemd[1]: Detected architecture x86-64.
Jan 23 19:54:25.006666 systemd[1]: Running in initrd.
Jan 23 19:54:25.006679 systemd[1]: No hostname configured, using default hostname.
Jan 23 19:54:25.006698 systemd[1]: Hostname set to .
Jan 23 19:54:25.006711 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:54:25.006725 systemd[1]: Queued start job for default target initrd.target.
Jan 23 19:54:25.006738 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:54:25.006752 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:54:25.006767 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 19:54:25.006781 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:54:25.006795 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 19:54:25.006814 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 19:54:25.006829 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 19:54:25.006843 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 19:54:25.006856 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:54:25.006870 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:54:25.006884 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:54:25.006897 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:54:25.006915 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:54:25.006929 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:54:25.006943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 19:54:25.006956 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 19:54:25.006970 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 19:54:25.006988 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 19:54:25.007009 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:54:25.007022 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:54:25.007036 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:54:25.007054 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:54:25.007069 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 19:54:25.007083 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:54:25.007099 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 19:54:25.007113 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 19:54:25.007127 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 19:54:25.007141 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:54:25.007155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:54:25.007177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:54:25.007191 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 19:54:25.007257 systemd-journald[210]: Collecting audit messages is disabled.
Jan 23 19:54:25.007303 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:54:25.007317 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 19:54:25.007331 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:54:25.007345 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:54:25.007360 systemd-journald[210]: Journal started
Jan 23 19:54:25.007389 systemd-journald[210]: Runtime Journal (/run/log/journal/ca9432ed3c6743678c13776e21cbc9a7) is 4.7M, max 37.8M, 33.1M free.
Jan 23 19:54:24.983804 systemd-modules-load[213]: Inserted module 'overlay'
Jan 23 19:54:25.079041 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 19:54:25.079075 kernel: Bridge firewalling registered
Jan 23 19:54:25.079101 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:54:25.030828 systemd-modules-load[213]: Inserted module 'br_netfilter'
Jan 23 19:54:25.078628 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:54:25.079852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:54:25.083624 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 19:54:25.086583 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:54:25.089585 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:54:25.092251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:54:25.116723 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:54:25.119987 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 19:54:25.124690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:54:25.130824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:54:25.132017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 19:54:25.134608 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 19:54:25.138579 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:54:25.171003 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 19:54:25.193444 systemd-resolved[250]: Positive Trust Anchors:
Jan 23 19:54:25.193463 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:54:25.193506 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:54:25.198842 systemd-resolved[250]: Defaulting to hostname 'linux'.
Jan 23 19:54:25.200472 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:54:25.202987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:54:25.295453 kernel: SCSI subsystem initialized
Jan 23 19:54:25.306458 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 19:54:25.318455 kernel: iscsi: registered transport (tcp)
Jan 23 19:54:25.343453 kernel: iscsi: registered transport (qla4xxx)
Jan 23 19:54:25.345491 kernel: QLogic iSCSI HBA Driver
Jan 23 19:54:25.374721 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:54:25.397247 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:54:25.400025 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:54:25.465199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 19:54:25.468370 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 19:54:25.533477 kernel: raid6: sse2x4 gen() 13495 MB/s
Jan 23 19:54:25.551499 kernel: raid6: sse2x2 gen() 9282 MB/s
Jan 23 19:54:25.570113 kernel: raid6: sse2x1 gen() 9314 MB/s
Jan 23 19:54:25.570170 kernel: raid6: using algorithm sse2x4 gen() 13495 MB/s
Jan 23 19:54:25.588990 kernel: raid6: .... xor() 7745 MB/s, rmw enabled
Jan 23 19:54:25.589040 kernel: raid6: using ssse3x2 recovery algorithm
Jan 23 19:54:25.614485 kernel: xor: automatically using best checksumming function avx
Jan 23 19:54:25.801475 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 19:54:25.811274 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 19:54:25.814203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:54:25.846742 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jan 23 19:54:25.856234 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:54:25.860178 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 19:54:25.891457 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation
Jan 23 19:54:25.926207 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 19:54:25.929147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:54:26.048148 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:54:26.052714 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 19:54:26.173450 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 23 19:54:26.188452 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 19:54:26.200472 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 23 19:54:26.208460 kernel: AES CTR mode by8 optimization enabled
Jan 23 19:54:26.216164 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 19:54:26.216340 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:54:26.217417 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:54:26.220702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 19:54:26.235334 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 19:54:26.235475 kernel: GPT:17805311 != 125829119
Jan 23 19:54:26.235531 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 19:54:26.235550 kernel: GPT:17805311 != 125829119
Jan 23 19:54:26.235566 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 19:54:26.235582 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:54:26.227611 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 19:54:26.243456 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 19:54:26.288450 kernel: libata version 3.00 loaded.
Jan 23 19:54:26.297458 kernel: ACPI: bus type USB registered
Jan 23 19:54:26.297503 kernel: usbcore: registered new interface driver usbfs
Jan 23 19:54:26.297522 kernel: usbcore: registered new interface driver hub
Jan 23 19:54:26.300546 kernel: usbcore: registered new device driver usb
Jan 23 19:54:26.379470 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 19:54:26.380061 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 19:54:26.383445 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 19:54:26.383691 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 19:54:26.383915 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 19:54:26.392772 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 23 19:54:26.450477 kernel: scsi host0: ahci
Jan 23 19:54:26.450752 kernel: scsi host1: ahci
Jan 23 19:54:26.450977 kernel: scsi host2: ahci
Jan 23 19:54:26.451217 kernel: scsi host3: ahci
Jan 23 19:54:26.451418 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 23 19:54:26.451728 kernel: scsi host4: ahci
Jan 23 19:54:26.451967 kernel: scsi host5: ahci
Jan 23 19:54:26.452174 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1
Jan 23 19:54:26.452207 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1
Jan 23 19:54:26.452223 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1
Jan 23 19:54:26.452265 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Jan 23 19:54:26.452521 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1
Jan 23 19:54:26.452551 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 23 19:54:26.452784 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1
Jan 23 19:54:26.452833 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Jan 23 19:54:26.453055 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1
Jan 23 19:54:26.453084 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Jan 23 19:54:26.447762 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 19:54:26.464445 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Jan 23 19:54:26.470445 kernel: hub 1-0:1.0: USB hub found
Jan 23 19:54:26.474465 kernel: hub 1-0:1.0: 4 ports detected
Jan 23 19:54:26.481500 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 23 19:54:26.482599 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 23 19:54:26.486346 kernel: hub 2-0:1.0: USB hub found
Jan 23 19:54:26.486637 kernel: hub 2-0:1.0: 4 ports detected
Jan 23 19:54:26.505871 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 19:54:26.516059 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 23 19:54:26.516834 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 23 19:54:26.520132 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 19:54:26.545447 disk-uuid[611]: Primary Header is updated.
Jan 23 19:54:26.545447 disk-uuid[611]: Secondary Entries is updated.
Jan 23 19:54:26.545447 disk-uuid[611]: Secondary Header is updated.
Jan 23 19:54:26.551451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:54:26.559474 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:54:26.718564 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 23 19:54:26.747468 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 23 19:54:26.750807 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 19:54:26.750843 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 19:54:26.752297 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 19:54:26.754523 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 19:54:26.754561 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 19:54:26.867453 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 19:54:26.874670 kernel: usbcore: registered new interface driver usbhid
Jan 23 19:54:26.874704 kernel: usbhid: USB HID core driver
Jan 23 19:54:26.882151 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 23 19:54:26.882192 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Jan 23 19:54:26.898463 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 19:54:26.901262 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 19:54:26.903017 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:54:26.904651 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:54:26.907600 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 19:54:26.934843 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 19:54:27.560487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 23 19:54:27.561632 disk-uuid[612]: The operation has completed successfully.
Jan 23 19:54:27.629659 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 19:54:27.629869 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 19:54:27.677345 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 19:54:27.707709 sh[637]: Success
Jan 23 19:54:27.735973 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 19:54:27.736094 kernel: device-mapper: uevent: version 1.0.3
Jan 23 19:54:27.737143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 19:54:27.755468 kernel: device-mapper: verity: sha256 using shash "sha256-avx"
Jan 23 19:54:27.812286 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 19:54:27.814396 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 19:54:27.827077 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 19:54:27.844479 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (649)
Jan 23 19:54:27.847461 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 19:54:27.850457 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:54:27.858588 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 19:54:27.858626 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 19:54:27.861487 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 19:54:27.862739 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 19:54:27.863827 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 19:54:27.864984 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 19:54:27.869504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 19:54:27.900469 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (682)
Jan 23 19:54:27.905457 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:54:27.905490 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 19:54:27.911503 kernel: BTRFS info (device vda6): turning on async discard
Jan 23 19:54:27.911541 kernel: BTRFS info (device vda6): enabling free space tree
Jan 23 19:54:27.919449 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 19:54:27.921168 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 19:54:27.924571 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 19:54:28.023077 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 19:54:28.027059 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:54:28.096048 systemd-networkd[818]: lo: Link UP
Jan 23 19:54:28.096061 systemd-networkd[818]: lo: Gained carrier
Jan 23 19:54:28.099133 systemd-networkd[818]: Enumeration completed
Jan 23 19:54:28.099251 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:54:28.100795 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:54:28.100801 systemd-networkd[818]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:54:28.102025 systemd[1]: Reached target network.target - Network.
Jan 23 19:54:28.104926 systemd-networkd[818]: eth0: Link UP
Jan 23 19:54:28.105720 systemd-networkd[818]: eth0: Gained carrier
Jan 23 19:54:28.105734 systemd-networkd[818]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:54:28.129534 systemd-networkd[818]: eth0: DHCPv4 address 10.230.78.134/30, gateway 10.230.78.133 acquired from 10.230.78.133
Jan 23 19:54:28.237461 ignition[735]: Ignition 2.22.0
Jan 23 19:54:28.238598 ignition[735]: Stage: fetch-offline
Jan 23 19:54:28.238680 ignition[735]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:54:28.238699 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 19:54:28.243974 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 19:54:28.238890 ignition[735]: parsed url from cmdline: ""
Jan 23 19:54:28.238897 ignition[735]: no config URL provided
Jan 23 19:54:28.238906 ignition[735]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 19:54:28.238934 ignition[735]: no config at "/usr/lib/ignition/user.ign"
Jan 23 19:54:28.246633 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 19:54:28.238951 ignition[735]: failed to fetch config: resource requires networking
Jan 23 19:54:28.239254 ignition[735]: Ignition finished successfully
Jan 23 19:54:28.298234 ignition[829]: Ignition 2.22.0
Jan 23 19:54:28.299402 ignition[829]: Stage: fetch
Jan 23 19:54:28.299605 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Jan 23 19:54:28.299635 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 23 19:54:28.299774 ignition[829]: parsed url from cmdline: ""
Jan 23 19:54:28.299780 ignition[829]: no config URL provided
Jan 23 19:54:28.299791 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 19:54:28.299815 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Jan 23 19:54:28.299971 ignition[829]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 23 19:54:28.300000 ignition[829]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 23 19:54:28.301507 ignition[829]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 23 19:54:28.321272 ignition[829]: GET result: OK
Jan 23 19:54:28.321810 ignition[829]: parsing config with SHA512: 904b8b8efa4f028c59f86541f293b18474079ad695dc9b45c932e50df00114f030fc5dfc1d5fbf5cb943735381ac8e0e4ecb901e771d4f1abd53744cc0631f36
Jan 23 19:54:28.326695 unknown[829]: fetched base config from "system"
Jan 23 19:54:28.326711 unknown[829]: fetched base config from "system"
Jan 23 19:54:28.327325 ignition[829]: fetch: fetch complete
Jan 23 19:54:28.326719 unknown[829]: fetched user config from "openstack"
Jan 23 19:54:28.327334 ignition[829]: fetch: fetch passed
Jan 23 19:54:28.327413 ignition[829]: Ignition finished successfully
Jan 23 19:54:28.332437 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 19:54:28.335588 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 19:54:28.398813 ignition[835]: Ignition 2.22.0 Jan 23 19:54:28.398837 ignition[835]: Stage: kargs Jan 23 19:54:28.399047 ignition[835]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:54:28.399065 ignition[835]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 19:54:28.400213 ignition[835]: kargs: kargs passed Jan 23 19:54:28.402755 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 19:54:28.400281 ignition[835]: Ignition finished successfully Jan 23 19:54:28.405628 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 19:54:28.456123 ignition[842]: Ignition 2.22.0 Jan 23 19:54:28.456157 ignition[842]: Stage: disks Jan 23 19:54:28.456394 ignition[842]: no configs at "/usr/lib/ignition/base.d" Jan 23 19:54:28.456413 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 19:54:28.458885 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 19:54:28.457318 ignition[842]: disks: disks passed Jan 23 19:54:28.461576 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 19:54:28.457409 ignition[842]: Ignition finished successfully Jan 23 19:54:28.462521 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 19:54:28.463807 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 19:54:28.465361 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 19:54:28.466848 systemd[1]: Reached target basic.target - Basic System. Jan 23 19:54:28.469558 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 19:54:28.499238 systemd-fsck[851]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 19:54:28.503797 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 19:54:28.506092 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 23 19:54:28.635751 kernel: EXT4-fs (vda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 19:54:28.636906 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 19:54:28.638161 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 19:54:28.640846 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:54:28.643513 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 19:54:28.644641 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 19:54:28.646581 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 23 19:54:28.649715 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 19:54:28.649775 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:54:28.661735 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 19:54:28.664096 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 19:54:28.675247 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859) Jan 23 19:54:28.678902 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:54:28.683596 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:54:28.688621 kernel: BTRFS info (device vda6): turning on async discard Jan 23 19:54:28.688686 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 19:54:28.692193 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 19:54:28.846474 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:28.871874 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 19:54:28.879232 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Jan 23 19:54:28.885448 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 19:54:28.892840 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 19:54:29.007655 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 19:54:29.011131 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 19:54:29.013570 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 19:54:29.033877 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 19:54:29.037737 kernel: BTRFS info (device vda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:54:29.061556 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 19:54:29.080010 ignition[978]: INFO : Ignition 2.22.0 Jan 23 19:54:29.080010 ignition[978]: INFO : Stage: mount Jan 23 19:54:29.081790 ignition[978]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:54:29.081790 ignition[978]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 19:54:29.081790 ignition[978]: INFO : mount: mount passed Jan 23 19:54:29.081790 ignition[978]: INFO : Ignition finished successfully Jan 23 19:54:29.083818 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 19:54:29.691772 systemd-networkd[818]: eth0: Gained IPv6LL Jan 23 19:54:29.876453 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:31.158165 systemd-networkd[818]: eth0: Ignoring DHCPv6 address 2a02:1348:179:93a1:24:19ff:fee6:4e86/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:93a1:24:19ff:fee6:4e86/64 assigned by NDisc. 
Jan 23 19:54:31.158181 systemd-networkd[818]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 23 19:54:31.888457 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:35.896544 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:35.904623 coreos-metadata[861]: Jan 23 19:54:35.904 WARN failed to locate config-drive, using the metadata service API instead Jan 23 19:54:35.928204 coreos-metadata[861]: Jan 23 19:54:35.928 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 19:54:35.941003 coreos-metadata[861]: Jan 23 19:54:35.940 INFO Fetch successful Jan 23 19:54:35.941807 coreos-metadata[861]: Jan 23 19:54:35.941 INFO wrote hostname srv-hs5p8.gb1.brightbox.com to /sysroot/etc/hostname Jan 23 19:54:35.944284 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 23 19:54:35.945709 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 23 19:54:35.949898 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 19:54:35.973389 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 19:54:35.999493 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993) Jan 23 19:54:36.005024 kernel: BTRFS info (device vda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 19:54:36.005056 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 19:54:36.011128 kernel: BTRFS info (device vda6): turning on async discard Jan 23 19:54:36.011182 kernel: BTRFS info (device vda6): enabling free space tree Jan 23 19:54:36.014543 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 19:54:36.056583 ignition[1011]: INFO : Ignition 2.22.0 Jan 23 19:54:36.056583 ignition[1011]: INFO : Stage: files Jan 23 19:54:36.058500 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:54:36.058500 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 19:54:36.058500 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping Jan 23 19:54:36.061247 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 19:54:36.061247 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 19:54:36.068935 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 19:54:36.068935 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 19:54:36.068935 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 19:54:36.065686 unknown[1011]: wrote ssh authorized keys file for user: core Jan 23 19:54:36.072684 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 19:54:36.072684 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 23 19:54:36.261522 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 19:54:36.519680 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 23 19:54:36.519680 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 19:54:36.531339 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 23 19:54:36.531339 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:54:36.531339 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 19:54:36.531339 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:54:36.531339 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 19:54:36.531339 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 19:54:36.531339 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 19:54:36.539009 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:54:36.539009 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 19:54:36.539009 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:54:36.539009 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:54:36.539009 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:54:36.539009 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 23 19:54:36.863697 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 19:54:38.594459 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 23 19:54:38.594459 ignition[1011]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 19:54:38.597938 ignition[1011]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:54:38.599237 ignition[1011]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 19:54:38.599237 ignition[1011]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 19:54:38.599237 ignition[1011]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 19:54:38.599237 ignition[1011]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 19:54:38.604132 ignition[1011]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:54:38.604132 ignition[1011]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 19:54:38.604132 ignition[1011]: INFO : files: files passed Jan 23 19:54:38.604132 ignition[1011]: INFO : Ignition finished successfully Jan 23 19:54:38.603755 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 19:54:38.610675 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 19:54:38.614692 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 23 19:54:38.630675 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 19:54:38.631285 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 19:54:38.653321 initrd-setup-root-after-ignition[1040]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:54:38.653321 initrd-setup-root-after-ignition[1040]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:54:38.656191 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 19:54:38.659273 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:54:38.661585 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 19:54:38.664787 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 19:54:38.729207 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 19:54:38.729404 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 19:54:38.731148 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 19:54:38.732360 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 19:54:38.734056 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 19:54:38.736603 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 19:54:38.767100 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:54:38.769744 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 19:54:38.800647 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 19:54:38.802466 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 23 19:54:38.804333 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 19:54:38.805683 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 19:54:38.805890 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 19:54:38.808061 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 19:54:38.809086 systemd[1]: Stopped target basic.target - Basic System. Jan 23 19:54:38.810463 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 19:54:38.811669 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 19:54:38.813344 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 19:54:38.814841 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 19:54:38.816370 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 19:54:38.817733 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 19:54:38.819415 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 19:54:38.821024 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 19:54:38.822416 systemd[1]: Stopped target swap.target - Swaps. Jan 23 19:54:38.823624 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 19:54:38.823928 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 19:54:38.825361 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 19:54:38.826275 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 19:54:38.827798 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 19:54:38.828009 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 19:54:38.829224 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 23 19:54:38.829398 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 19:54:38.831287 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 19:54:38.831573 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 19:54:38.833367 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 19:54:38.833654 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 19:54:38.842619 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 19:54:38.845948 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 19:54:38.849217 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 19:54:38.849412 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 19:54:38.853768 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 19:54:38.853952 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 19:54:38.863128 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 19:54:38.864137 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 19:54:38.887157 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 19:54:38.907708 ignition[1064]: INFO : Ignition 2.22.0 Jan 23 19:54:38.907708 ignition[1064]: INFO : Stage: umount Jan 23 19:54:38.909501 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 19:54:38.909501 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 23 19:54:38.909501 ignition[1064]: INFO : umount: umount passed Jan 23 19:54:38.909501 ignition[1064]: INFO : Ignition finished successfully Jan 23 19:54:38.913394 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 19:54:38.913843 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 23 19:54:38.915555 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 19:54:38.915657 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 19:54:38.916644 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 19:54:38.916717 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 19:54:38.917988 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 19:54:38.918060 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 19:54:38.919238 systemd[1]: Stopped target network.target - Network. Jan 23 19:54:38.920449 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 19:54:38.920553 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 19:54:38.921857 systemd[1]: Stopped target paths.target - Path Units. Jan 23 19:54:38.923077 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 19:54:38.926562 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 19:54:38.927831 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 19:54:38.929292 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 19:54:38.931072 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 19:54:38.931181 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 19:54:38.932337 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 19:54:38.932409 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 19:54:38.933652 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 19:54:38.933742 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 19:54:38.935056 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 19:54:38.935138 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 23 19:54:38.936600 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 19:54:38.938366 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 19:54:38.941681 systemd-networkd[818]: eth0: DHCPv6 lease lost Jan 23 19:54:38.945687 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 19:54:38.945912 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 19:54:38.948320 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 19:54:38.948755 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 19:54:38.948948 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 19:54:38.953528 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 19:54:38.954348 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 19:54:38.955734 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 19:54:38.955819 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 19:54:38.958581 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 19:54:38.960747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 19:54:38.960836 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 19:54:38.962279 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 19:54:38.962361 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 19:54:38.966271 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 19:54:38.966341 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 19:54:38.967173 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 19:54:38.967237 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 23 19:54:38.968617 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 19:54:38.975028 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 19:54:38.975131 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:54:38.981043 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 19:54:38.981319 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 19:54:38.983561 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 19:54:38.983672 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 19:54:38.986550 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 19:54:38.986618 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 19:54:38.988957 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 19:54:38.989051 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 19:54:38.992091 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 19:54:38.992192 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 19:54:38.994243 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 19:54:38.994322 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 19:54:38.996611 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 19:54:38.998754 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 19:54:38.998839 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 19:54:39.001589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 23 19:54:39.001660 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 19:54:39.003373 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 19:54:39.003469 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 19:54:39.005727 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 19:54:39.005795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 19:54:39.007220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 19:54:39.007296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:54:39.016210 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 19:54:39.016289 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 19:54:39.016359 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 19:54:39.018465 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 19:54:39.019234 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 19:54:39.021474 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 19:54:39.026259 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 19:54:39.026549 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 19:54:39.044874 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 19:54:39.045070 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 19:54:39.047477 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 19:54:39.048251 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Jan 23 19:54:39.048394 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 19:54:39.050956 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 19:54:39.073569 systemd[1]: Switching root. Jan 23 19:54:39.111506 systemd-journald[210]: Journal stopped Jan 23 19:54:40.802282 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). Jan 23 19:54:40.802416 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 19:54:40.826096 kernel: SELinux: policy capability open_perms=1 Jan 23 19:54:40.826127 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 19:54:40.826159 kernel: SELinux: policy capability always_check_network=0 Jan 23 19:54:40.826180 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 19:54:40.826233 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 19:54:40.826268 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 19:54:40.826296 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 19:54:40.826316 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 19:54:40.826342 kernel: audit: type=1403 audit(1769198079.407:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 19:54:40.826385 systemd[1]: Successfully loaded SELinux policy in 77.331ms. Jan 23 19:54:40.826448 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.264ms. Jan 23 19:54:40.826486 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 19:54:40.826536 systemd[1]: Detected virtualization kvm. Jan 23 19:54:40.826558 systemd[1]: Detected architecture x86-64. Jan 23 19:54:40.826585 systemd[1]: Detected first boot. 
Jan 23 19:54:40.826613 systemd[1]: Hostname set to .
Jan 23 19:54:40.826641 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 19:54:40.826682 zram_generator::config[1108]: No configuration found.
Jan 23 19:54:40.826711 kernel: Guest personality initialized and is inactive
Jan 23 19:54:40.826738 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 19:54:40.826758 kernel: Initialized host personality
Jan 23 19:54:40.826790 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 19:54:40.826818 systemd[1]: Populated /etc with preset unit settings.
Jan 23 19:54:40.826847 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 19:54:40.826871 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 19:54:40.826892 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 19:54:40.826911 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 19:54:40.826950 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 19:54:40.826983 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 19:54:40.827040 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 19:54:40.827086 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 19:54:40.827124 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 19:54:40.827188 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 19:54:40.827242 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 19:54:40.827277 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 19:54:40.827338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 19:54:40.827361 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 19:54:40.827381 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 19:54:40.827401 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 19:54:40.827421 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 19:54:40.842595 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 19:54:40.842653 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 19:54:40.842677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 19:54:40.842698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 19:54:40.842717 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 19:54:40.842737 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 19:54:40.842757 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 19:54:40.842790 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 19:54:40.842810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 19:54:40.842837 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 19:54:40.842907 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 19:54:40.842943 systemd[1]: Reached target swap.target - Swaps.
Jan 23 19:54:40.842986 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 19:54:40.843027 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 19:54:40.843058 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 19:54:40.843096 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 19:54:40.843132 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 19:54:40.843153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 19:54:40.843183 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 19:54:40.843219 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 19:54:40.843242 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 19:54:40.843263 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 19:54:40.843283 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:40.843303 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 19:54:40.843323 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 19:54:40.843342 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 19:54:40.843384 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 19:54:40.843412 systemd[1]: Reached target machines.target - Containers.
Jan 23 19:54:40.845411 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 19:54:40.845475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:54:40.845500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 19:54:40.845521 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 19:54:40.845541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:54:40.845561 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 19:54:40.845581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:54:40.845613 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 19:54:40.845650 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 19:54:40.845672 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 19:54:40.845699 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 19:54:40.845721 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 19:54:40.845742 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 19:54:40.845763 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 19:54:40.845798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:54:40.845821 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 19:54:40.845855 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 19:54:40.845878 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 19:54:40.845900 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 19:54:40.845922 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 19:54:40.845956 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 19:54:40.845994 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 19:54:40.846029 systemd[1]: Stopped verity-setup.service.
Jan 23 19:54:40.846053 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:40.846084 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 19:54:40.846107 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 19:54:40.846142 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 19:54:40.846164 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 19:54:40.846184 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 19:54:40.846204 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 19:54:40.846225 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 19:54:40.846254 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 19:54:40.846277 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 19:54:40.846298 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 19:54:40.846334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:54:40.846369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:54:40.846391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:54:40.846410 kernel: loop: module loaded
Jan 23 19:54:40.855131 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:54:40.855183 kernel: fuse: init (API version 7.41)
Jan 23 19:54:40.855208 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 19:54:40.855229 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 19:54:40.855260 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 19:54:40.855301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 19:54:40.855324 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 19:54:40.855345 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 19:54:40.855365 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 19:54:40.861554 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 19:54:40.861604 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 19:54:40.861647 kernel: ACPI: bus type drm_connector registered
Jan 23 19:54:40.861668 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 19:54:40.861688 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 19:54:40.861719 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 19:54:40.861754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:54:40.861774 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 19:54:40.862286 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 19:54:40.862313 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 19:54:40.862334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 19:54:40.862421 systemd-journald[1198]: Collecting audit messages is disabled.
Jan 23 19:54:40.864931 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 19:54:40.864962 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 19:54:40.864984 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 19:54:40.865005 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 19:54:40.865025 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 19:54:40.865047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 19:54:40.866874 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 19:54:40.866910 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 19:54:40.866932 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 19:54:40.866953 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 19:54:40.866998 systemd-journald[1198]: Journal started
Jan 23 19:54:40.867032 systemd-journald[1198]: Runtime Journal (/run/log/journal/ca9432ed3c6743678c13776e21cbc9a7) is 4.7M, max 37.8M, 33.1M free.
Jan 23 19:54:40.243831 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 19:54:40.881701 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 19:54:40.881765 kernel: loop0: detected capacity change from 0 to 110984
Jan 23 19:54:40.881806 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 19:54:40.265321 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 23 19:54:40.266035 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 19:54:40.876326 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 19:54:40.880859 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 19:54:40.897047 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 19:54:40.965842 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 19:54:40.974628 systemd-journald[1198]: Time spent on flushing to /var/log/journal/ca9432ed3c6743678c13776e21cbc9a7 is 123.970ms for 1171 entries.
Jan 23 19:54:40.974628 systemd-journald[1198]: System Journal (/var/log/journal/ca9432ed3c6743678c13776e21cbc9a7) is 8M, max 584.8M, 576.8M free.
Jan 23 19:54:41.142864 systemd-journald[1198]: Received client request to flush runtime journal.
Jan 23 19:54:41.142957 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 19:54:41.142999 kernel: loop1: detected capacity change from 0 to 128560
Jan 23 19:54:41.077464 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Jan 23 19:54:41.077488 systemd-tmpfiles[1228]: ACLs are not supported, ignoring.
Jan 23 19:54:41.095607 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 19:54:41.105758 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 19:54:41.136521 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 19:54:41.148197 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 19:54:41.150727 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 19:54:41.167479 kernel: loop2: detected capacity change from 0 to 8
Jan 23 19:54:41.223464 kernel: loop3: detected capacity change from 0 to 224512
Jan 23 19:54:41.269695 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 19:54:41.275482 kernel: loop4: detected capacity change from 0 to 110984
Jan 23 19:54:41.296549 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 19:54:41.302767 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 19:54:41.350455 kernel: loop5: detected capacity change from 0 to 128560
Jan 23 19:54:41.393513 kernel: loop6: detected capacity change from 0 to 8
Jan 23 19:54:41.433496 kernel: loop7: detected capacity change from 0 to 224512
Jan 23 19:54:41.447257 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Jan 23 19:54:41.447284 systemd-tmpfiles[1274]: ACLs are not supported, ignoring.
Jan 23 19:54:41.467065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 19:54:41.475893 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Jan 23 19:54:41.478519 (sd-merge)[1272]: Merged extensions into '/usr'.
Jan 23 19:54:41.492472 systemd[1]: Reload requested from client PID 1227 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 19:54:41.492498 systemd[1]: Reloading...
Jan 23 19:54:41.749832 zram_generator::config[1302]: No configuration found.
Jan 23 19:54:41.782466 ldconfig[1224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 19:54:42.069328 systemd[1]: Reloading finished in 572 ms.
Jan 23 19:54:42.112026 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 19:54:42.115994 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 19:54:42.130850 systemd[1]: Starting ensure-sysext.service...
Jan 23 19:54:42.133695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 19:54:42.166666 systemd[1]: Reload requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)...
Jan 23 19:54:42.166699 systemd[1]: Reloading...
Jan 23 19:54:42.197521 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 19:54:42.198866 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 19:54:42.199629 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 19:54:42.200353 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 19:54:42.201997 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 19:54:42.202672 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Jan 23 19:54:42.202915 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Jan 23 19:54:42.210832 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 19:54:42.210851 systemd-tmpfiles[1359]: Skipping /boot
Jan 23 19:54:42.228048 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 19:54:42.228217 systemd-tmpfiles[1359]: Skipping /boot
Jan 23 19:54:42.266455 zram_generator::config[1386]: No configuration found.
Jan 23 19:54:42.534364 systemd[1]: Reloading finished in 366 ms.
Jan 23 19:54:42.550784 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 19:54:42.566201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 19:54:42.576868 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 19:54:42.581713 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 19:54:42.592649 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 19:54:42.597113 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 19:54:42.600821 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 19:54:42.606833 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 19:54:42.613354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:42.613684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:54:42.617114 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:54:42.629120 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:54:42.637637 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 19:54:42.639639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:54:42.639813 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:54:42.639953 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:42.648088 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:42.648383 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:54:42.648664 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:54:42.648802 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:54:42.659829 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 19:54:42.660615 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:42.663184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:54:42.663696 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:54:42.666247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:54:42.668655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:54:42.685175 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 19:54:42.686547 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 19:54:42.695983 systemd[1]: Finished ensure-sysext.service.
Jan 23 19:54:42.698682 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 19:54:42.700126 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 19:54:42.707789 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:42.708242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 19:54:42.716316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 19:54:42.722711 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 19:54:42.727522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 19:54:42.728857 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 19:54:42.728921 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 19:54:42.734721 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 23 19:54:42.736589 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 19:54:42.736639 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 19:54:42.749627 systemd-udevd[1448]: Using default interface naming scheme 'v255'.
Jan 23 19:54:42.771345 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 19:54:42.778568 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 19:54:42.803276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 19:54:42.805070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 19:54:42.808045 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 19:54:42.811468 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 19:54:42.811801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 19:54:42.825540 augenrules[1486]: No rules
Jan 23 19:54:42.823498 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 19:54:42.825578 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 19:54:42.827336 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 19:54:42.828748 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 19:54:42.830003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 19:54:42.831493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 19:54:42.832747 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 19:54:42.840284 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 19:54:42.842042 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 19:54:43.112648 systemd-networkd[1500]: lo: Link UP
Jan 23 19:54:43.112661 systemd-networkd[1500]: lo: Gained carrier
Jan 23 19:54:43.114047 systemd-networkd[1500]: Enumeration completed
Jan 23 19:54:43.114543 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 19:54:43.119732 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 19:54:43.126605 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 19:54:43.162939 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 23 19:54:43.163843 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 19:54:43.180495 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 23 19:54:43.188978 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 23 19:54:43.190128 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 19:54:43.192899 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 19:54:43.223527 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 19:54:43.235452 systemd-resolved[1447]: Positive Trust Anchors:
Jan 23 19:54:43.235990 systemd-resolved[1447]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 19:54:43.236209 systemd-resolved[1447]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 19:54:43.248462 systemd-resolved[1447]: Using system hostname 'srv-hs5p8.gb1.brightbox.com'.
Jan 23 19:54:43.252205 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 19:54:43.253607 systemd[1]: Reached target network.target - Network.
Jan 23 19:54:43.254271 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 19:54:43.254983 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 19:54:43.255779 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 19:54:43.257583 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 19:54:43.258524 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Jan 23 19:54:43.259771 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 19:54:43.261636 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 19:54:43.262371 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 19:54:43.263130 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 19:54:43.263179 systemd[1]: Reached target paths.target - Path Units.
Jan 23 19:54:43.264440 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 19:54:43.267697 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 19:54:43.271385 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 19:54:43.278415 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 23 19:54:43.279581 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 23 19:54:43.281221 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 23 19:54:43.289363 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 19:54:43.290567 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 23 19:54:43.293374 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 19:54:43.296446 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 19:54:43.298209 systemd[1]: Reached target basic.target - Basic System.
Jan 23 19:54:43.299577 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 19:54:43.299644 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 19:54:43.302669 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 19:54:43.307816 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 19:54:43.311707 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 19:54:43.318805 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 19:54:43.321963 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 19:54:43.326132 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 19:54:43.326879 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 19:54:43.335702 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Jan 23 19:54:43.345461 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 23 19:54:43.349964 jq[1546]: false
Jan 23 19:54:43.359418 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:54:43.359841 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 19:54:43.363051 systemd-networkd[1500]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 19:54:43.367545 systemd-networkd[1500]: eth0: Link UP
Jan 23 19:54:43.367818 systemd-networkd[1500]: eth0: Gained carrier
Jan 23 19:54:43.367845 systemd-networkd[1500]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 19:54:43.380677 extend-filesystems[1547]: Found /dev/vda6
Jan 23 19:54:43.384678 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 19:54:43.388703 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 19:54:43.391447 extend-filesystems[1547]: Found /dev/vda9
Jan 23 19:54:43.394734 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 19:54:43.399760 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing passwd entry cache Jan 23 19:54:43.400168 extend-filesystems[1547]: Checking size of /dev/vda9 Jan 23 19:54:43.401147 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 19:54:43.402936 oslogin_cache_refresh[1548]: Refreshing passwd entry cache Jan 23 19:54:43.404246 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 19:54:43.405060 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 19:54:43.406800 systemd-networkd[1500]: eth0: DHCPv4 address 10.230.78.134/30, gateway 10.230.78.133 acquired from 10.230.78.133 Jan 23 19:54:43.411755 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 19:54:43.414542 systemd-timesyncd[1476]: Network configuration changed, trying to establish connection. Jan 23 19:54:43.421537 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 19:54:43.423256 extend-filesystems[1547]: Resized partition /dev/vda9 Jan 23 19:54:43.426932 extend-filesystems[1569]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 19:54:43.430802 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting users, quitting Jan 23 19:54:43.430802 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:54:43.430802 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Refreshing group entry cache Jan 23 19:54:43.430072 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 23 19:54:43.429215 oslogin_cache_refresh[1548]: Failure getting users, quitting Jan 23 19:54:43.429242 oslogin_cache_refresh[1548]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 19:54:43.429356 oslogin_cache_refresh[1548]: Refreshing group entry cache Jan 23 19:54:43.435961 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 23 19:54:43.436026 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 19:54:43.431997 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 19:54:43.432086 oslogin_cache_refresh[1548]: Failure getting groups, quitting Jan 23 19:54:43.436235 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Failure getting groups, quitting Jan 23 19:54:43.436235 google_oslogin_nss_cache[1548]: oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:54:43.432518 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 19:54:43.432101 oslogin_cache_refresh[1548]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 19:54:43.435534 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 19:54:43.437524 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 19:54:43.489886 dbus-daemon[1544]: [system] SELinux support is enabled Jan 23 19:54:43.498423 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 19:54:43.503583 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 19:54:43.503623 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 23 19:54:43.504407 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 19:54:43.504432 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 19:54:43.507187 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 19:54:43.507575 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 19:54:43.519261 update_engine[1563]: I20260123 19:54:43.519120 1563 main.cc:92] Flatcar Update Engine starting Jan 23 19:54:43.521891 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 19:54:43.522242 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 19:54:43.531853 dbus-daemon[1544]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1500 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 19:54:43.533719 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 19:54:43.541483 update_engine[1563]: I20260123 19:54:43.538873 1563 update_check_scheduler.cc:74] Next update check in 4m29s Jan 23 19:54:43.541994 jq[1564]: true Jan 23 19:54:43.543257 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 19:54:43.553854 systemd[1]: Started update-engine.service - Update Engine. Jan 23 19:54:43.592217 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 19:54:43.594618 tar[1581]: linux-amd64/LICENSE Jan 23 19:54:43.596768 tar[1581]: linux-amd64/helm Jan 23 19:54:43.630585 jq[1591]: true Jan 23 19:54:44.465962 systemd-resolved[1447]: Clock change detected. Flushing caches. 
Jan 23 19:54:44.468169 systemd-timesyncd[1476]: Contacted time server 185.137.221.158:123 (0.flatcar.pool.ntp.org). Jan 23 19:54:44.468290 systemd-timesyncd[1476]: Initial clock synchronization to Fri 2026-01-23 19:54:44.464510 UTC. Jan 23 19:54:44.633426 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 19:54:44.649454 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 19:54:44.670260 systemd-logind[1561]: New seat seat0. Jan 23 19:54:44.675409 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 19:54:44.682276 bash[1615]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:54:44.686775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 19:54:44.695054 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 23 19:54:44.697614 systemd[1]: Starting sshkeys.service... Jan 23 19:54:44.699619 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 19:54:44.708854 kernel: ACPI: button: Power Button [PWRF] Jan 23 19:54:44.731913 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 23 19:54:44.731913 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 23 19:54:44.731913 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 23 19:54:44.745567 extend-filesystems[1547]: Resized filesystem in /dev/vda9 Jan 23 19:54:44.733774 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 19:54:44.735843 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 23 19:54:44.786174 containerd[1583]: time="2026-01-23T19:54:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 19:54:44.797033 containerd[1583]: time="2026-01-23T19:54:44.790691577Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 19:54:44.799183 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 19:54:44.818018 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.821234710Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="41.998µs" Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.821337395Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.821456381Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.823127479Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.823165493Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.823246582Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.823393849Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" 
id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 19:54:44.826498 containerd[1583]: time="2026-01-23T19:54:44.823415933Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:54:44.845995 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:44.843944 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.850897867Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.850948743Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.850987827Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.851004437Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.851231987Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.851663316Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.851723185Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.851753622Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.851851619Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.852164018Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 19:54:44.852876 containerd[1583]: time="2026-01-23T19:54:44.852273908Z" level=info msg="metadata content store policy set" policy=shared Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.856985040Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857068681Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857095237Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857116921Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857150987Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857183519Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857214450Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857236825Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857255275Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857271741Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857289639Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857318599Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857492286Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 19:54:44.858693 containerd[1583]: time="2026-01-23T19:54:44.857522328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857544465Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857565373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857602863Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857628950Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 
containerd[1583]: time="2026-01-23T19:54:44.857648925Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857667069Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857696736Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857717893Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857734461Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.857853369Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.858919647Z" level=info msg="Start snapshots syncer" Jan 23 19:54:44.859165 containerd[1583]: time="2026-01-23T19:54:44.858960507Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 19:54:44.859532 containerd[1583]: time="2026-01-23T19:54:44.859289936Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 19:54:44.859532 containerd[1583]: time="2026-01-23T19:54:44.859366877Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 19:54:44.859764 containerd[1583]: time="2026-01-23T19:54:44.859456726Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 19:54:44.859764 containerd[1583]: time="2026-01-23T19:54:44.859615684Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 19:54:44.859764 containerd[1583]: time="2026-01-23T19:54:44.859645302Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 19:54:44.859764 containerd[1583]: time="2026-01-23T19:54:44.859670089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 19:54:44.859764 containerd[1583]: time="2026-01-23T19:54:44.859689585Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 19:54:44.859764 containerd[1583]: time="2026-01-23T19:54:44.859720015Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 19:54:44.859764 containerd[1583]: time="2026-01-23T19:54:44.859751021Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.859768334Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.859811394Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860705859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860746814Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860865624Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860904986Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860919545Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860934902Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860948097Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860962928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.860986632Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.861010046Z" level=info msg="runtime interface created" Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.861019568Z" level=info msg="created NRI interface" Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.861031157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 19:54:44.864194 containerd[1583]: time="2026-01-23T19:54:44.861049632Z" level=info msg="Connect containerd service" Jan 23 19:54:44.866321 containerd[1583]: time="2026-01-23T19:54:44.861076762Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 19:54:44.866321 
containerd[1583]: time="2026-01-23T19:54:44.864488135Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 19:54:44.921193 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 19:54:45.015912 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 19:54:45.020037 dbus-daemon[1544]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 19:54:45.045573 dbus-daemon[1544]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1590 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 19:54:45.053469 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213346302Z" level=info msg="Start subscribing containerd event" Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213444863Z" level=info msg="Start recovering state" Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213567830Z" level=info msg="Start event monitor" Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213588183Z" level=info msg="Start cni network conf syncer for default" Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213599498Z" level=info msg="Start streaming server" Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213612240Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213622645Z" level=info msg="runtime interface starting up..." Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213631710Z" level=info msg="starting plugins..." 
Jan 23 19:54:45.213912 containerd[1583]: time="2026-01-23T19:54:45.213654484Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 19:54:45.217128 containerd[1583]: time="2026-01-23T19:54:45.214478634Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 19:54:45.217128 containerd[1583]: time="2026-01-23T19:54:45.214588516Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 19:54:45.217128 containerd[1583]: time="2026-01-23T19:54:45.214723057Z" level=info msg="containerd successfully booted in 0.429175s" Jan 23 19:54:45.215885 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 19:54:45.384204 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 19:54:45.399881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 19:54:45.478006 polkitd[1647]: Started polkitd version 126 Jan 23 19:54:45.493123 polkitd[1647]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 19:54:45.493615 polkitd[1647]: Loading rules from directory /run/polkit-1/rules.d Jan 23 19:54:45.493695 polkitd[1647]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 19:54:45.494056 polkitd[1647]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 19:54:45.494098 polkitd[1647]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 19:54:45.494163 polkitd[1647]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 19:54:45.505350 polkitd[1647]: Finished loading, compiling and executing 2 rules Jan 23 19:54:45.508180 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 23 19:54:45.516143 dbus-daemon[1544]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 19:54:45.519397 polkitd[1647]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 19:54:45.560061 systemd-hostnamed[1590]: Hostname set to (static) Jan 23 19:54:45.595937 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 19:54:45.614981 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 19:54:45.618746 systemd[1]: Started sshd@0-10.230.78.134:22-68.220.241.50:50446.service - OpenSSH per-connection server daemon (68.220.241.50:50446). Jan 23 19:54:45.684053 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 19:54:45.684603 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 19:54:45.693312 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 19:54:45.785164 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 19:54:45.796360 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 19:54:45.873215 systemd-networkd[1500]: eth0: Gained IPv6LL Jan 23 19:54:45.889847 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 19:54:45.968914 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 19:54:45.970927 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 19:54:45.974432 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 19:54:45.983019 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 19:54:46.010574 systemd-logind[1561]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 19:54:46.080508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:54:46.140162 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 23 19:54:46.145662 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 19:54:46.266729 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 19:54:46.316090 tar[1581]: linux-amd64/README.md Jan 23 19:54:46.346354 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 19:54:46.380006 sshd[1677]: Accepted publickey for core from 68.220.241.50 port 50446 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:54:46.384368 sshd-session[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:54:46.413592 systemd-logind[1561]: New session 1 of user core. Jan 23 19:54:46.414389 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 19:54:46.421207 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 19:54:46.481933 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 19:54:46.487734 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 19:54:46.510618 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 19:54:46.515481 systemd-logind[1561]: New session c1 of user core. Jan 23 19:54:46.707708 systemd[1709]: Queued start job for default target default.target. Jan 23 19:54:46.715283 systemd[1709]: Created slice app.slice - User Application Slice. Jan 23 19:54:46.715321 systemd[1709]: Reached target paths.target - Paths. Jan 23 19:54:46.715403 systemd[1709]: Reached target timers.target - Timers. Jan 23 19:54:46.721933 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 19:54:46.744365 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 19:54:46.745358 systemd[1709]: Reached target sockets.target - Sockets. Jan 23 19:54:46.745524 systemd[1709]: Reached target basic.target - Basic System. 
Jan 23 19:54:46.745642 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 19:54:46.749073 systemd[1709]: Reached target default.target - Main User Target. Jan 23 19:54:46.749171 systemd[1709]: Startup finished in 221ms. Jan 23 19:54:46.764626 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 19:54:46.851668 systemd-networkd[1500]: eth0: Ignoring DHCPv6 address 2a02:1348:179:93a1:24:19ff:fee6:4e86/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:93a1:24:19ff:fee6:4e86/64 assigned by NDisc. Jan 23 19:54:46.851682 systemd-networkd[1500]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 23 19:54:47.112593 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:47.116842 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:47.191262 systemd[1]: Started sshd@1-10.230.78.134:22-68.220.241.50:50454.service - OpenSSH per-connection server daemon (68.220.241.50:50454). Jan 23 19:54:47.772131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:54:47.774886 sshd[1723]: Accepted publickey for core from 68.220.241.50 port 50454 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:54:47.777629 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:54:47.784412 (kubelet)[1731]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:54:47.786680 systemd-logind[1561]: New session 2 of user core. Jan 23 19:54:47.796372 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 19:54:48.199857 sshd[1732]: Connection closed by 68.220.241.50 port 50454 Jan 23 19:54:48.200795 sshd-session[1723]: pam_unix(sshd:session): session closed for user core Jan 23 19:54:48.209097 systemd-logind[1561]: Session 2 logged out. 
Waiting for processes to exit. Jan 23 19:54:48.211549 systemd[1]: sshd@1-10.230.78.134:22-68.220.241.50:50454.service: Deactivated successfully. Jan 23 19:54:48.215466 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 19:54:48.219009 systemd-logind[1561]: Removed session 2. Jan 23 19:54:48.310171 systemd[1]: Started sshd@2-10.230.78.134:22-68.220.241.50:50470.service - OpenSSH per-connection server daemon (68.220.241.50:50470). Jan 23 19:54:48.556453 kubelet[1731]: E0123 19:54:48.556259 1731 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:54:48.561192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:54:48.561621 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:54:48.562818 systemd[1]: kubelet.service: Consumed 1.569s CPU time, 264.7M memory peak. Jan 23 19:54:48.900443 sshd[1742]: Accepted publickey for core from 68.220.241.50 port 50470 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:54:48.902364 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:54:48.909133 systemd-logind[1561]: New session 3 of user core. Jan 23 19:54:48.923161 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 23 19:54:49.128925 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:49.143848 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:49.304594 sshd[1747]: Connection closed by 68.220.241.50 port 50470 Jan 23 19:54:49.305588 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jan 23 19:54:49.311606 systemd[1]: sshd@2-10.230.78.134:22-68.220.241.50:50470.service: Deactivated successfully. Jan 23 19:54:49.314733 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 19:54:49.316578 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. Jan 23 19:54:49.320097 systemd-logind[1561]: Removed session 3. Jan 23 19:54:50.981725 login[1688]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 19:54:50.992497 systemd-logind[1561]: New session 4 of user core. Jan 23 19:54:50.998944 login[1687]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 19:54:51.001171 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 19:54:51.014252 systemd-logind[1561]: New session 5 of user core. Jan 23 19:54:51.026368 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 19:54:53.157926 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:53.161859 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 23 19:54:53.171870 coreos-metadata[1543]: Jan 23 19:54:53.170 WARN failed to locate config-drive, using the metadata service API instead Jan 23 19:54:53.177896 coreos-metadata[1626]: Jan 23 19:54:53.176 WARN failed to locate config-drive, using the metadata service API instead Jan 23 19:54:53.197724 coreos-metadata[1543]: Jan 23 19:54:53.197 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 23 19:54:53.198294 coreos-metadata[1626]: Jan 23 19:54:53.198 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 23 19:54:53.204177 coreos-metadata[1543]: Jan 23 19:54:53.204 INFO Fetch failed with 404: resource not found Jan 23 19:54:53.204284 coreos-metadata[1543]: Jan 23 19:54:53.204 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 23 19:54:53.205182 coreos-metadata[1543]: Jan 23 19:54:53.205 INFO Fetch successful Jan 23 19:54:53.205290 coreos-metadata[1543]: Jan 23 19:54:53.205 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 23 19:54:53.216636 coreos-metadata[1543]: Jan 23 19:54:53.216 INFO Fetch successful Jan 23 19:54:53.216636 coreos-metadata[1543]: Jan 23 19:54:53.216 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 23 19:54:53.223138 coreos-metadata[1626]: Jan 23 19:54:53.223 INFO Fetch successful Jan 23 19:54:53.223406 coreos-metadata[1626]: Jan 23 19:54:53.223 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 19:54:53.231122 coreos-metadata[1543]: Jan 23 19:54:53.231 INFO Fetch successful Jan 23 19:54:53.231339 coreos-metadata[1543]: Jan 23 19:54:53.231 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 23 19:54:53.243352 coreos-metadata[1543]: Jan 23 
19:54:53.243 INFO Fetch successful Jan 23 19:54:53.243631 coreos-metadata[1543]: Jan 23 19:54:53.243 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 23 19:54:53.249863 coreos-metadata[1626]: Jan 23 19:54:53.249 INFO Fetch successful Jan 23 19:54:53.259530 unknown[1626]: wrote ssh authorized keys file for user: core Jan 23 19:54:53.260328 coreos-metadata[1543]: Jan 23 19:54:53.260 INFO Fetch successful Jan 23 19:54:53.294173 update-ssh-keys[1785]: Updated "/home/core/.ssh/authorized_keys" Jan 23 19:54:53.295292 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 19:54:53.296920 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 19:54:53.298104 systemd[1]: Finished sshkeys.service. Jan 23 19:54:53.303176 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 19:54:53.303660 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 19:54:53.307961 systemd[1]: Startup finished in 3.632s (kernel) + 14.734s (initrd) + 13.284s (userspace) = 31.652s. Jan 23 19:54:58.592898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 19:54:58.596197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:54:58.816986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 19:54:58.827406 (kubelet)[1800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:54:58.884548 kubelet[1800]: E0123 19:54:58.884370 1800 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:54:58.890121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:54:58.890403 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:54:58.891352 systemd[1]: kubelet.service: Consumed 249ms CPU time, 108.3M memory peak. Jan 23 19:54:59.408940 systemd[1]: Started sshd@3-10.230.78.134:22-68.220.241.50:37450.service - OpenSSH per-connection server daemon (68.220.241.50:37450). Jan 23 19:54:59.990845 sshd[1808]: Accepted publickey for core from 68.220.241.50 port 37450 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:54:59.993383 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:55:00.003008 systemd-logind[1561]: New session 6 of user core. Jan 23 19:55:00.024333 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 19:55:00.395398 sshd[1811]: Connection closed by 68.220.241.50 port 37450 Jan 23 19:55:00.396602 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Jan 23 19:55:00.402738 systemd[1]: sshd@3-10.230.78.134:22-68.220.241.50:37450.service: Deactivated successfully. Jan 23 19:55:00.406130 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 19:55:00.408886 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 23 19:55:00.410563 systemd-logind[1561]: Removed session 6. 
Jan 23 19:55:00.497446 systemd[1]: Started sshd@4-10.230.78.134:22-68.220.241.50:37452.service - OpenSSH per-connection server daemon (68.220.241.50:37452). Jan 23 19:55:01.082598 sshd[1817]: Accepted publickey for core from 68.220.241.50 port 37452 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:55:01.084858 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:55:01.093474 systemd-logind[1561]: New session 7 of user core. Jan 23 19:55:01.103163 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 19:55:01.483773 sshd[1820]: Connection closed by 68.220.241.50 port 37452 Jan 23 19:55:01.482570 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Jan 23 19:55:01.489225 systemd[1]: sshd@4-10.230.78.134:22-68.220.241.50:37452.service: Deactivated successfully. Jan 23 19:55:01.491963 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 19:55:01.493420 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 23 19:55:01.496124 systemd-logind[1561]: Removed session 7. Jan 23 19:55:01.588046 systemd[1]: Started sshd@5-10.230.78.134:22-68.220.241.50:37460.service - OpenSSH per-connection server daemon (68.220.241.50:37460). Jan 23 19:55:02.169487 sshd[1826]: Accepted publickey for core from 68.220.241.50 port 37460 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:55:02.171534 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:55:02.179290 systemd-logind[1561]: New session 8 of user core. Jan 23 19:55:02.188104 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 19:55:02.575742 sshd[1829]: Connection closed by 68.220.241.50 port 37460 Jan 23 19:55:02.575182 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Jan 23 19:55:02.582366 systemd[1]: sshd@5-10.230.78.134:22-68.220.241.50:37460.service: Deactivated successfully. 
Jan 23 19:55:02.585046 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 19:55:02.587170 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Jan 23 19:55:02.590076 systemd-logind[1561]: Removed session 8. Jan 23 19:55:02.678316 systemd[1]: Started sshd@6-10.230.78.134:22-68.220.241.50:53184.service - OpenSSH per-connection server daemon (68.220.241.50:53184). Jan 23 19:55:03.262861 sshd[1835]: Accepted publickey for core from 68.220.241.50 port 53184 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:55:03.264379 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:55:03.272924 systemd-logind[1561]: New session 9 of user core. Jan 23 19:55:03.284142 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 19:55:03.592400 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 19:55:03.592980 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:55:03.610963 sudo[1839]: pam_unix(sudo:session): session closed for user root Jan 23 19:55:03.700884 sshd[1838]: Connection closed by 68.220.241.50 port 53184 Jan 23 19:55:03.702204 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Jan 23 19:55:03.708299 systemd[1]: sshd@6-10.230.78.134:22-68.220.241.50:53184.service: Deactivated successfully. Jan 23 19:55:03.711429 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 19:55:03.712700 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 23 19:55:03.714992 systemd-logind[1561]: Removed session 9. Jan 23 19:55:03.807075 systemd[1]: Started sshd@7-10.230.78.134:22-68.220.241.50:53186.service - OpenSSH per-connection server daemon (68.220.241.50:53186). 
Jan 23 19:55:04.390216 sshd[1845]: Accepted publickey for core from 68.220.241.50 port 53186 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:55:04.392193 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:55:04.399738 systemd-logind[1561]: New session 10 of user core. Jan 23 19:55:04.411102 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 19:55:04.760419 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 19:55:04.761292 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:55:04.768015 sudo[1850]: pam_unix(sudo:session): session closed for user root Jan 23 19:55:04.777400 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 19:55:04.778263 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:55:04.792063 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 19:55:04.862352 augenrules[1872]: No rules Jan 23 19:55:04.863380 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 19:55:04.863925 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 19:55:04.865516 sudo[1849]: pam_unix(sudo:session): session closed for user root Jan 23 19:55:04.956377 sshd[1848]: Connection closed by 68.220.241.50 port 53186 Jan 23 19:55:04.957358 sshd-session[1845]: pam_unix(sshd:session): session closed for user core Jan 23 19:55:04.964239 systemd[1]: sshd@7-10.230.78.134:22-68.220.241.50:53186.service: Deactivated successfully. Jan 23 19:55:04.966614 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 19:55:04.968421 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 23 19:55:04.969973 systemd-logind[1561]: Removed session 10. 
Jan 23 19:55:05.058077 systemd[1]: Started sshd@8-10.230.78.134:22-68.220.241.50:53200.service - OpenSSH per-connection server daemon (68.220.241.50:53200). Jan 23 19:55:05.638405 sshd[1881]: Accepted publickey for core from 68.220.241.50 port 53200 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:55:05.640272 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:55:05.648311 systemd-logind[1561]: New session 11 of user core. Jan 23 19:55:05.654081 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 19:55:05.955147 sudo[1885]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 19:55:05.956627 sudo[1885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 19:55:06.655135 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 19:55:06.677488 (dockerd)[1903]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 19:55:07.189853 dockerd[1903]: time="2026-01-23T19:55:07.189736463Z" level=info msg="Starting up" Jan 23 19:55:07.191478 dockerd[1903]: time="2026-01-23T19:55:07.191439981Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 19:55:07.224930 dockerd[1903]: time="2026-01-23T19:55:07.224856373Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 19:55:07.256578 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1547802624-merged.mount: Deactivated successfully. Jan 23 19:55:07.288484 dockerd[1903]: time="2026-01-23T19:55:07.288116239Z" level=info msg="Loading containers: start." 
Jan 23 19:55:07.318843 kernel: Initializing XFRM netlink socket Jan 23 19:55:07.783007 systemd-networkd[1500]: docker0: Link UP Jan 23 19:55:07.787417 dockerd[1903]: time="2026-01-23T19:55:07.787111661Z" level=info msg="Loading containers: done." Jan 23 19:55:07.815435 dockerd[1903]: time="2026-01-23T19:55:07.815302694Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 19:55:07.815658 dockerd[1903]: time="2026-01-23T19:55:07.815455152Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 19:55:07.815658 dockerd[1903]: time="2026-01-23T19:55:07.815632057Z" level=info msg="Initializing buildkit" Jan 23 19:55:07.817358 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3412951748-merged.mount: Deactivated successfully. Jan 23 19:55:07.846400 dockerd[1903]: time="2026-01-23T19:55:07.846345566Z" level=info msg="Completed buildkit initialization" Jan 23 19:55:07.856997 dockerd[1903]: time="2026-01-23T19:55:07.856919131Z" level=info msg="Daemon has completed initialization" Jan 23 19:55:07.857196 dockerd[1903]: time="2026-01-23T19:55:07.857026282Z" level=info msg="API listen on /run/docker.sock" Jan 23 19:55:07.858042 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 19:55:09.051786 containerd[1583]: time="2026-01-23T19:55:09.051646231Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 19:55:09.092665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 19:55:09.097122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:55:09.508837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 19:55:09.523639 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:55:09.745875 kubelet[2125]: E0123 19:55:09.745345 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:55:09.750145 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:55:09.750418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:55:09.751083 systemd[1]: kubelet.service: Consumed 574ms CPU time, 109.3M memory peak. Jan 23 19:55:10.004127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1795256149.mount: Deactivated successfully. Jan 23 19:55:12.088600 containerd[1583]: time="2026-01-23T19:55:12.088463271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:12.091212 containerd[1583]: time="2026-01-23T19:55:12.091151991Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 23 19:55:12.092079 containerd[1583]: time="2026-01-23T19:55:12.092026147Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:12.096614 containerd[1583]: time="2026-01-23T19:55:12.096543247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:12.099235 containerd[1583]: 
time="2026-01-23T19:55:12.099173108Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.047391657s" Jan 23 19:55:12.099335 containerd[1583]: time="2026-01-23T19:55:12.099252464Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 23 19:55:12.101062 containerd[1583]: time="2026-01-23T19:55:12.101015666Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 19:55:14.499775 containerd[1583]: time="2026-01-23T19:55:14.499651256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:14.501224 containerd[1583]: time="2026-01-23T19:55:14.501169970Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 23 19:55:14.506244 containerd[1583]: time="2026-01-23T19:55:14.506153386Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:14.511168 containerd[1583]: time="2026-01-23T19:55:14.510889385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:14.511168 containerd[1583]: time="2026-01-23T19:55:14.511002041Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id 
\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.409945553s" Jan 23 19:55:14.511168 containerd[1583]: time="2026-01-23T19:55:14.511040205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 23 19:55:14.512304 containerd[1583]: time="2026-01-23T19:55:14.512219152Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 19:55:16.532632 containerd[1583]: time="2026-01-23T19:55:16.532546413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:16.534240 containerd[1583]: time="2026-01-23T19:55:16.533911390Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 23 19:55:16.535109 containerd[1583]: time="2026-01-23T19:55:16.535070898Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:16.540427 containerd[1583]: time="2026-01-23T19:55:16.540387168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:16.541638 containerd[1583]: time="2026-01-23T19:55:16.541603744Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest 
\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.029119587s" Jan 23 19:55:16.542006 containerd[1583]: time="2026-01-23T19:55:16.541780202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 23 19:55:16.542572 containerd[1583]: time="2026-01-23T19:55:16.542530129Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 19:55:16.873628 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 19:55:18.387074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579049379.mount: Deactivated successfully. Jan 23 19:55:19.374422 containerd[1583]: time="2026-01-23T19:55:19.374330870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:19.376288 containerd[1583]: time="2026-01-23T19:55:19.376219080Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 23 19:55:19.377829 containerd[1583]: time="2026-01-23T19:55:19.377439489Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:19.380840 containerd[1583]: time="2026-01-23T19:55:19.380495994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:19.384191 containerd[1583]: time="2026-01-23T19:55:19.384134926Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.841459652s" Jan 23 19:55:19.384382 containerd[1583]: time="2026-01-23T19:55:19.384347196Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 23 19:55:19.385477 containerd[1583]: time="2026-01-23T19:55:19.385365808Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 19:55:19.844085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 19:55:19.847554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:55:20.008167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145398645.mount: Deactivated successfully. Jan 23 19:55:20.237982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:55:20.247276 (kubelet)[2221]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:55:20.336240 kubelet[2221]: E0123 19:55:20.336175 2221 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:55:20.341202 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:55:20.341437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:55:20.342214 systemd[1]: kubelet.service: Consumed 346ms CPU time, 108.7M memory peak. 
Jan 23 19:55:21.720842 containerd[1583]: time="2026-01-23T19:55:21.720311158Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:21.722980 containerd[1583]: time="2026-01-23T19:55:21.722951478Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 23 19:55:21.724943 containerd[1583]: time="2026-01-23T19:55:21.723830880Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:21.728547 containerd[1583]: time="2026-01-23T19:55:21.728488384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:21.731566 containerd[1583]: time="2026-01-23T19:55:21.731530050Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.346106402s" Jan 23 19:55:21.731748 containerd[1583]: time="2026-01-23T19:55:21.731718538Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 23 19:55:21.732962 containerd[1583]: time="2026-01-23T19:55:21.732937890Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 19:55:22.295641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4290894217.mount: Deactivated successfully. 
Jan 23 19:55:22.301866 containerd[1583]: time="2026-01-23T19:55:22.301549613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:55:22.302501 containerd[1583]: time="2026-01-23T19:55:22.302464858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 23 19:55:22.303492 containerd[1583]: time="2026-01-23T19:55:22.303144235Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:55:22.305674 containerd[1583]: time="2026-01-23T19:55:22.305640384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 19:55:22.306473 containerd[1583]: time="2026-01-23T19:55:22.306436691Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.363297ms" Jan 23 19:55:22.306547 containerd[1583]: time="2026-01-23T19:55:22.306478476Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 19:55:22.307349 containerd[1583]: time="2026-01-23T19:55:22.307298395Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 19:55:22.861661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586377958.mount: 
Deactivated successfully. Jan 23 19:55:26.005518 containerd[1583]: time="2026-01-23T19:55:26.005454686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:26.007437 containerd[1583]: time="2026-01-23T19:55:26.007407767Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 23 19:55:26.007885 containerd[1583]: time="2026-01-23T19:55:26.007840940Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:26.011163 containerd[1583]: time="2026-01-23T19:55:26.011132945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:26.012666 containerd[1583]: time="2026-01-23T19:55:26.012622112Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.705284196s" Jan 23 19:55:26.012871 containerd[1583]: time="2026-01-23T19:55:26.012814999Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 23 19:55:29.668972 update_engine[1563]: I20260123 19:55:29.668766 1563 update_attempter.cc:509] Updating boot flags... Jan 23 19:55:30.342573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 23 19:55:30.347232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 23 19:55:30.656021 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:55:30.667510 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 19:55:30.735612 kubelet[2378]: E0123 19:55:30.735470 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 19:55:30.737447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 19:55:30.737678 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 19:55:30.738503 systemd[1]: kubelet.service: Consumed 235ms CPU time, 110.2M memory peak. Jan 23 19:55:31.564083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:55:31.564344 systemd[1]: kubelet.service: Consumed 235ms CPU time, 110.2M memory peak. Jan 23 19:55:31.573625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:55:31.615698 systemd[1]: Reload requested from client PID 2392 ('systemctl') (unit session-11.scope)... Jan 23 19:55:31.615770 systemd[1]: Reloading... Jan 23 19:55:31.829233 zram_generator::config[2437]: No configuration found. Jan 23 19:55:32.129777 systemd[1]: Reloading finished in 513 ms. Jan 23 19:55:32.219043 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 19:55:32.219201 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 19:55:32.219779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:55:32.219913 systemd[1]: kubelet.service: Consumed 157ms CPU time, 98.4M memory peak. 
Jan 23 19:55:32.222675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:55:32.415052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:55:32.427383 (kubelet)[2505]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:55:32.513697 kubelet[2505]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:55:32.514419 kubelet[2505]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:55:32.514505 kubelet[2505]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 19:55:32.514882 kubelet[2505]: I0123 19:55:32.514800 2505 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:55:32.967655 kubelet[2505]: I0123 19:55:32.967544 2505 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 19:55:32.967655 kubelet[2505]: I0123 19:55:32.967624 2505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:55:32.968162 kubelet[2505]: I0123 19:55:32.968123 2505 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 19:55:33.010795 kubelet[2505]: I0123 19:55:33.010078 2505 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:55:33.012141 kubelet[2505]: E0123 19:55:33.012101 2505 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.78.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:33.038272 kubelet[2505]: I0123 19:55:33.038226 2505 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:55:33.050042 kubelet[2505]: I0123 19:55:33.050018 2505 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 19:55:33.054040 kubelet[2505]: I0123 19:55:33.053987 2505 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:55:33.054526 kubelet[2505]: I0123 19:55:33.054146 2505 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hs5p8.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:55:33.056845 kubelet[2505]: I0123 19:55:33.056795 2505 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 23 19:55:33.057406 kubelet[2505]: I0123 19:55:33.056992 2505 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 19:55:33.058462 kubelet[2505]: I0123 19:55:33.058438 2505 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:55:33.062902 kubelet[2505]: I0123 19:55:33.062864 2505 kubelet.go:446] "Attempting to sync node with API server" Jan 23 19:55:33.063150 kubelet[2505]: I0123 19:55:33.063126 2505 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:55:33.065754 kubelet[2505]: I0123 19:55:33.065513 2505 kubelet.go:352] "Adding apiserver pod source" Jan 23 19:55:33.065754 kubelet[2505]: I0123 19:55:33.065635 2505 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:55:33.067677 kubelet[2505]: W0123 19:55:33.067576 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.78.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hs5p8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 19:55:33.067757 kubelet[2505]: E0123 19:55:33.067704 2505 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.78.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hs5p8.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:33.070423 kubelet[2505]: W0123 19:55:33.070345 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.78.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 19:55:33.070757 kubelet[2505]: E0123 19:55:33.070721 2505 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.78.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:33.072691 kubelet[2505]: I0123 19:55:33.072652 2505 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:55:33.076433 kubelet[2505]: I0123 19:55:33.076408 2505 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 19:55:33.077452 kubelet[2505]: W0123 19:55:33.077431 2505 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 19:55:33.081068 kubelet[2505]: I0123 19:55:33.081044 2505 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:55:33.081283 kubelet[2505]: I0123 19:55:33.081262 2505 server.go:1287] "Started kubelet" Jan 23 19:55:33.083598 kubelet[2505]: I0123 19:55:33.083432 2505 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:55:33.086519 kubelet[2505]: I0123 19:55:33.085693 2505 server.go:479] "Adding debug handlers to kubelet server" Jan 23 19:55:33.089332 kubelet[2505]: I0123 19:55:33.089198 2505 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:55:33.090039 kubelet[2505]: I0123 19:55:33.090005 2505 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:55:33.091509 kubelet[2505]: I0123 19:55:33.091476 2505 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:55:33.094831 kubelet[2505]: E0123 19:55:33.091258 2505 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.78.134:6443/api/v1/namespaces/default/events\": dial 
tcp 10.230.78.134:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-hs5p8.gb1.brightbox.com.188d74541396c729 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hs5p8.gb1.brightbox.com,UID:srv-hs5p8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-hs5p8.gb1.brightbox.com,},FirstTimestamp:2026-01-23 19:55:33.081196329 +0000 UTC m=+0.646575709,LastTimestamp:2026-01-23 19:55:33.081196329 +0000 UTC m=+0.646575709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hs5p8.gb1.brightbox.com,}" Jan 23 19:55:33.094831 kubelet[2505]: I0123 19:55:33.094155 2505 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:55:33.100496 kubelet[2505]: E0123 19:55:33.100462 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" Jan 23 19:55:33.100793 kubelet[2505]: I0123 19:55:33.100754 2505 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:55:33.101375 kubelet[2505]: I0123 19:55:33.101351 2505 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:55:33.101628 kubelet[2505]: I0123 19:55:33.101607 2505 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:55:33.102423 kubelet[2505]: W0123 19:55:33.102362 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.78.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 19:55:33.102581 kubelet[2505]: E0123 19:55:33.102552 2505 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.78.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:33.103156 kubelet[2505]: E0123 19:55:33.103119 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hs5p8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.134:6443: connect: connection refused" interval="200ms" Jan 23 19:55:33.115960 kubelet[2505]: E0123 19:55:33.115916 2505 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:55:33.116485 kubelet[2505]: I0123 19:55:33.116459 2505 factory.go:221] Registration of the containerd container factory successfully Jan 23 19:55:33.116624 kubelet[2505]: I0123 19:55:33.116604 2505 factory.go:221] Registration of the systemd container factory successfully Jan 23 19:55:33.116863 kubelet[2505]: I0123 19:55:33.116835 2505 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:55:33.139425 kubelet[2505]: I0123 19:55:33.138058 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 19:55:33.139875 kubelet[2505]: I0123 19:55:33.139814 2505 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 19:55:33.139994 kubelet[2505]: I0123 19:55:33.139941 2505 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 19:55:33.140054 kubelet[2505]: I0123 19:55:33.140027 2505 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 19:55:33.140054 kubelet[2505]: I0123 19:55:33.140044 2505 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 19:55:33.140245 kubelet[2505]: E0123 19:55:33.140182 2505 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:55:33.151343 kubelet[2505]: W0123 19:55:33.150308 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.78.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 19:55:33.151343 kubelet[2505]: E0123 19:55:33.150388 2505 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.78.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:33.165799 kubelet[2505]: I0123 19:55:33.165745 2505 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:55:33.165799 kubelet[2505]: I0123 19:55:33.165768 2505 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:55:33.166028 kubelet[2505]: I0123 19:55:33.165852 2505 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:55:33.168040 kubelet[2505]: I0123 19:55:33.168018 2505 policy_none.go:49] "None policy: Start" Jan 23 19:55:33.168143 kubelet[2505]: I0123 19:55:33.168080 2505 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:55:33.168143 kubelet[2505]: I0123 19:55:33.168132 2505 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:55:33.193657 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 19:55:33.201294 kubelet[2505]: E0123 19:55:33.201258 2505 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" Jan 23 19:55:33.208673 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 19:55:33.214336 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 19:55:33.237381 kubelet[2505]: I0123 19:55:33.235754 2505 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 19:55:33.239564 kubelet[2505]: I0123 19:55:33.239504 2505 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:55:33.240924 kubelet[2505]: E0123 19:55:33.240898 2505 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:55:33.241090 kubelet[2505]: I0123 19:55:33.240723 2505 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:55:33.244172 kubelet[2505]: E0123 19:55:33.244060 2505 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 19:55:33.244457 kubelet[2505]: E0123 19:55:33.244420 2505 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-hs5p8.gb1.brightbox.com\" not found" Jan 23 19:55:33.245942 kubelet[2505]: I0123 19:55:33.245905 2505 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:55:33.304853 kubelet[2505]: E0123 19:55:33.304779 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hs5p8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.134:6443: connect: connection refused" interval="400ms" Jan 23 19:55:33.346320 kubelet[2505]: I0123 19:55:33.346169 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.347734 kubelet[2505]: E0123 19:55:33.347661 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.134:6443/api/v1/nodes\": dial tcp 10.230.78.134:6443: connect: connection refused" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.459120 systemd[1]: Created slice kubepods-burstable-pod3bed4a4577bc711f6d193deed0c5e2a3.slice - libcontainer container kubepods-burstable-pod3bed4a4577bc711f6d193deed0c5e2a3.slice. Jan 23 19:55:33.483899 kubelet[2505]: E0123 19:55:33.483710 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.489157 systemd[1]: Created slice kubepods-burstable-podeebba25d2e16863884a10abe583725fd.slice - libcontainer container kubepods-burstable-podeebba25d2e16863884a10abe583725fd.slice. 
Jan 23 19:55:33.493180 kubelet[2505]: E0123 19:55:33.493156 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.497755 systemd[1]: Created slice kubepods-burstable-poda20d4c1580f5afb4ec4d320b2c722ea3.slice - libcontainer container kubepods-burstable-poda20d4c1580f5afb4ec4d320b2c722ea3.slice. Jan 23 19:55:33.504279 kubelet[2505]: E0123 19:55:33.504231 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.504771 kubelet[2505]: I0123 19:55:33.504734 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bed4a4577bc711f6d193deed0c5e2a3-k8s-certs\") pod \"kube-apiserver-srv-hs5p8.gb1.brightbox.com\" (UID: \"3bed4a4577bc711f6d193deed0c5e2a3\") " pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.504897 kubelet[2505]: I0123 19:55:33.504779 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bed4a4577bc711f6d193deed0c5e2a3-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hs5p8.gb1.brightbox.com\" (UID: \"3bed4a4577bc711f6d193deed0c5e2a3\") " pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.504897 kubelet[2505]: I0123 19:55:33.504838 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-flexvolume-dir\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " 
pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.504897 kubelet[2505]: I0123 19:55:33.504881 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-k8s-certs\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.505042 kubelet[2505]: I0123 19:55:33.504924 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-kubeconfig\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.505042 kubelet[2505]: I0123 19:55:33.504953 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.505042 kubelet[2505]: I0123 19:55:33.504979 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bed4a4577bc711f6d193deed0c5e2a3-ca-certs\") pod \"kube-apiserver-srv-hs5p8.gb1.brightbox.com\" (UID: \"3bed4a4577bc711f6d193deed0c5e2a3\") " pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.505042 kubelet[2505]: I0123 19:55:33.505016 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/a20d4c1580f5afb4ec4d320b2c722ea3-kubeconfig\") pod \"kube-scheduler-srv-hs5p8.gb1.brightbox.com\" (UID: \"a20d4c1580f5afb4ec4d320b2c722ea3\") " pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.505231 kubelet[2505]: I0123 19:55:33.505046 2505 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-ca-certs\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.551489 kubelet[2505]: I0123 19:55:33.551390 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.552249 kubelet[2505]: E0123 19:55:33.552087 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.134:6443/api/v1/nodes\": dial tcp 10.230.78.134:6443: connect: connection refused" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.706757 kubelet[2505]: E0123 19:55:33.706641 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hs5p8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.134:6443: connect: connection refused" interval="800ms" Jan 23 19:55:33.789703 containerd[1583]: time="2026-01-23T19:55:33.789486512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hs5p8.gb1.brightbox.com,Uid:3bed4a4577bc711f6d193deed0c5e2a3,Namespace:kube-system,Attempt:0,}" Jan 23 19:55:33.800957 containerd[1583]: time="2026-01-23T19:55:33.800906010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hs5p8.gb1.brightbox.com,Uid:eebba25d2e16863884a10abe583725fd,Namespace:kube-system,Attempt:0,}" Jan 23 
19:55:33.808129 containerd[1583]: time="2026-01-23T19:55:33.807603806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hs5p8.gb1.brightbox.com,Uid:a20d4c1580f5afb4ec4d320b2c722ea3,Namespace:kube-system,Attempt:0,}" Jan 23 19:55:33.957633 kubelet[2505]: I0123 19:55:33.957581 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.958342 kubelet[2505]: E0123 19:55:33.958296 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.134:6443/api/v1/nodes\": dial tcp 10.230.78.134:6443: connect: connection refused" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:33.968225 containerd[1583]: time="2026-01-23T19:55:33.968127830Z" level=info msg="connecting to shim efad1eb51ec335c5cfe1afa644fdf56071e45680b14a5f88879a930651fba14d" address="unix:///run/containerd/s/e922356b29a836b0acc172d5c188d66e839af93ed831ee1b1a502cbca01a445a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:55:33.969152 containerd[1583]: time="2026-01-23T19:55:33.969032436Z" level=info msg="connecting to shim 9fc113679a4eae22d31d23a4d8eabb6514be1d8c7319992a93c1fc000897c116" address="unix:///run/containerd/s/e682c85cd6deb6d037107260e4ef99dc905b77aca861326ead9a7ff5efc9e5d7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:55:33.974818 containerd[1583]: time="2026-01-23T19:55:33.974770156Z" level=info msg="connecting to shim 33b27a78bf80a2faa24efb4a01cb19e1dd1e86d4b268c89d4ef5889820c12dc7" address="unix:///run/containerd/s/9070b9e6672565b2a29dfc2cd7ded88c00758d6d89cfb502f60da36653ae223c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:55:34.117846 kubelet[2505]: W0123 19:55:34.117313 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.78.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hs5p8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 
19:55:34.117846 kubelet[2505]: E0123 19:55:34.117420 2505 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.78.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hs5p8.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:34.122041 systemd[1]: Started cri-containerd-33b27a78bf80a2faa24efb4a01cb19e1dd1e86d4b268c89d4ef5889820c12dc7.scope - libcontainer container 33b27a78bf80a2faa24efb4a01cb19e1dd1e86d4b268c89d4ef5889820c12dc7. Jan 23 19:55:34.133563 systemd[1]: Started cri-containerd-9fc113679a4eae22d31d23a4d8eabb6514be1d8c7319992a93c1fc000897c116.scope - libcontainer container 9fc113679a4eae22d31d23a4d8eabb6514be1d8c7319992a93c1fc000897c116. Jan 23 19:55:34.137676 systemd[1]: Started cri-containerd-efad1eb51ec335c5cfe1afa644fdf56071e45680b14a5f88879a930651fba14d.scope - libcontainer container efad1eb51ec335c5cfe1afa644fdf56071e45680b14a5f88879a930651fba14d. 
Jan 23 19:55:34.295166 containerd[1583]: time="2026-01-23T19:55:34.294325035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hs5p8.gb1.brightbox.com,Uid:3bed4a4577bc711f6d193deed0c5e2a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fc113679a4eae22d31d23a4d8eabb6514be1d8c7319992a93c1fc000897c116\"" Jan 23 19:55:34.313860 containerd[1583]: time="2026-01-23T19:55:34.312271243Z" level=info msg="CreateContainer within sandbox \"9fc113679a4eae22d31d23a4d8eabb6514be1d8c7319992a93c1fc000897c116\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 19:55:34.314108 kubelet[2505]: W0123 19:55:34.314043 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.78.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 19:55:34.314201 kubelet[2505]: E0123 19:55:34.314131 2505 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.78.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:34.332054 containerd[1583]: time="2026-01-23T19:55:34.331987036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hs5p8.gb1.brightbox.com,Uid:eebba25d2e16863884a10abe583725fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"33b27a78bf80a2faa24efb4a01cb19e1dd1e86d4b268c89d4ef5889820c12dc7\"" Jan 23 19:55:34.339008 containerd[1583]: time="2026-01-23T19:55:34.338970113Z" level=info msg="CreateContainer within sandbox \"33b27a78bf80a2faa24efb4a01cb19e1dd1e86d4b268c89d4ef5889820c12dc7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 19:55:34.341820 containerd[1583]: 
time="2026-01-23T19:55:34.341759637Z" level=info msg="Container 3b76e485cf0837f06cbeea9a6bf2ba29a13dab1fc171ee36eb221d02ba6e78db: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:55:34.369834 containerd[1583]: time="2026-01-23T19:55:34.368803884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hs5p8.gb1.brightbox.com,Uid:a20d4c1580f5afb4ec4d320b2c722ea3,Namespace:kube-system,Attempt:0,} returns sandbox id \"efad1eb51ec335c5cfe1afa644fdf56071e45680b14a5f88879a930651fba14d\"" Jan 23 19:55:34.375127 containerd[1583]: time="2026-01-23T19:55:34.375067145Z" level=info msg="CreateContainer within sandbox \"efad1eb51ec335c5cfe1afa644fdf56071e45680b14a5f88879a930651fba14d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 19:55:34.377043 containerd[1583]: time="2026-01-23T19:55:34.377013077Z" level=info msg="CreateContainer within sandbox \"9fc113679a4eae22d31d23a4d8eabb6514be1d8c7319992a93c1fc000897c116\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3b76e485cf0837f06cbeea9a6bf2ba29a13dab1fc171ee36eb221d02ba6e78db\"" Jan 23 19:55:34.378089 containerd[1583]: time="2026-01-23T19:55:34.377169215Z" level=info msg="Container 33b289f90aa685b08aaf2497d67c47984f8896239a5ddb7328d3bbbb0ccad09c: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:55:34.380093 containerd[1583]: time="2026-01-23T19:55:34.380063644Z" level=info msg="StartContainer for \"3b76e485cf0837f06cbeea9a6bf2ba29a13dab1fc171ee36eb221d02ba6e78db\"" Jan 23 19:55:34.391459 containerd[1583]: time="2026-01-23T19:55:34.391308653Z" level=info msg="connecting to shim 3b76e485cf0837f06cbeea9a6bf2ba29a13dab1fc171ee36eb221d02ba6e78db" address="unix:///run/containerd/s/e682c85cd6deb6d037107260e4ef99dc905b77aca861326ead9a7ff5efc9e5d7" protocol=ttrpc version=3 Jan 23 19:55:34.403019 containerd[1583]: time="2026-01-23T19:55:34.402961678Z" level=info msg="CreateContainer within sandbox 
\"33b27a78bf80a2faa24efb4a01cb19e1dd1e86d4b268c89d4ef5889820c12dc7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"33b289f90aa685b08aaf2497d67c47984f8896239a5ddb7328d3bbbb0ccad09c\"" Jan 23 19:55:34.404388 containerd[1583]: time="2026-01-23T19:55:34.403992430Z" level=info msg="StartContainer for \"33b289f90aa685b08aaf2497d67c47984f8896239a5ddb7328d3bbbb0ccad09c\"" Jan 23 19:55:34.405272 containerd[1583]: time="2026-01-23T19:55:34.405241603Z" level=info msg="Container 5fd428c2b00c07ab5a6adca98258c3702d8465e694e3819aba9c4773902034e0: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:55:34.406053 containerd[1583]: time="2026-01-23T19:55:34.406017139Z" level=info msg="connecting to shim 33b289f90aa685b08aaf2497d67c47984f8896239a5ddb7328d3bbbb0ccad09c" address="unix:///run/containerd/s/9070b9e6672565b2a29dfc2cd7ded88c00758d6d89cfb502f60da36653ae223c" protocol=ttrpc version=3 Jan 23 19:55:34.419447 containerd[1583]: time="2026-01-23T19:55:34.419386010Z" level=info msg="CreateContainer within sandbox \"efad1eb51ec335c5cfe1afa644fdf56071e45680b14a5f88879a930651fba14d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5fd428c2b00c07ab5a6adca98258c3702d8465e694e3819aba9c4773902034e0\"" Jan 23 19:55:34.421516 containerd[1583]: time="2026-01-23T19:55:34.421443279Z" level=info msg="StartContainer for \"5fd428c2b00c07ab5a6adca98258c3702d8465e694e3819aba9c4773902034e0\"" Jan 23 19:55:34.424290 containerd[1583]: time="2026-01-23T19:55:34.424253026Z" level=info msg="connecting to shim 5fd428c2b00c07ab5a6adca98258c3702d8465e694e3819aba9c4773902034e0" address="unix:///run/containerd/s/e922356b29a836b0acc172d5c188d66e839af93ed831ee1b1a502cbca01a445a" protocol=ttrpc version=3 Jan 23 19:55:34.430296 kubelet[2505]: W0123 19:55:34.430009 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.230.78.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 19:55:34.431245 kubelet[2505]: E0123 19:55:34.430983 2505 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.78.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:34.434245 kubelet[2505]: W0123 19:55:34.434154 2505 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.78.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.78.134:6443: connect: connection refused Jan 23 19:55:34.434491 kubelet[2505]: E0123 19:55:34.434250 2505 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.78.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:34.436045 systemd[1]: Started cri-containerd-3b76e485cf0837f06cbeea9a6bf2ba29a13dab1fc171ee36eb221d02ba6e78db.scope - libcontainer container 3b76e485cf0837f06cbeea9a6bf2ba29a13dab1fc171ee36eb221d02ba6e78db. 
Jan 23 19:55:34.510220 kubelet[2505]: E0123 19:55:34.509931 2505 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hs5p8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.134:6443: connect: connection refused" interval="1.6s" Jan 23 19:55:34.514641 systemd[1]: Started cri-containerd-33b289f90aa685b08aaf2497d67c47984f8896239a5ddb7328d3bbbb0ccad09c.scope - libcontainer container 33b289f90aa685b08aaf2497d67c47984f8896239a5ddb7328d3bbbb0ccad09c. Jan 23 19:55:34.536272 systemd[1]: Started cri-containerd-5fd428c2b00c07ab5a6adca98258c3702d8465e694e3819aba9c4773902034e0.scope - libcontainer container 5fd428c2b00c07ab5a6adca98258c3702d8465e694e3819aba9c4773902034e0. Jan 23 19:55:34.638340 containerd[1583]: time="2026-01-23T19:55:34.637214274Z" level=info msg="StartContainer for \"3b76e485cf0837f06cbeea9a6bf2ba29a13dab1fc171ee36eb221d02ba6e78db\" returns successfully" Jan 23 19:55:34.682167 containerd[1583]: time="2026-01-23T19:55:34.682097119Z" level=info msg="StartContainer for \"33b289f90aa685b08aaf2497d67c47984f8896239a5ddb7328d3bbbb0ccad09c\" returns successfully" Jan 23 19:55:34.694656 containerd[1583]: time="2026-01-23T19:55:34.694587080Z" level=info msg="StartContainer for \"5fd428c2b00c07ab5a6adca98258c3702d8465e694e3819aba9c4773902034e0\" returns successfully" Jan 23 19:55:34.764162 kubelet[2505]: I0123 19:55:34.764093 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:34.764794 kubelet[2505]: E0123 19:55:34.764653 2505 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.78.134:6443/api/v1/nodes\": dial tcp 10.230.78.134:6443: connect: connection refused" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:35.074254 kubelet[2505]: E0123 19:55:35.074030 2505 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.78.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.78.134:6443: connect: connection refused" logger="UnhandledError" Jan 23 19:55:35.171044 kubelet[2505]: E0123 19:55:35.171002 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:35.172461 kubelet[2505]: E0123 19:55:35.172435 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:35.178647 kubelet[2505]: E0123 19:55:35.178612 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:36.183501 kubelet[2505]: E0123 19:55:36.183347 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:36.184656 kubelet[2505]: E0123 19:55:36.183997 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:36.369895 kubelet[2505]: I0123 19:55:36.368049 2505 kubelet_node_status.go:75] "Attempting to register node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.197743 kubelet[2505]: E0123 19:55:37.197411 2505 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" 
Jan 23 19:55:37.706260 kubelet[2505]: E0123 19:55:37.706122 2505 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-hs5p8.gb1.brightbox.com\" not found" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.788546 kubelet[2505]: E0123 19:55:37.788306 2505 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-hs5p8.gb1.brightbox.com.188d74541396c729 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hs5p8.gb1.brightbox.com,UID:srv-hs5p8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-hs5p8.gb1.brightbox.com,},FirstTimestamp:2026-01-23 19:55:33.081196329 +0000 UTC m=+0.646575709,LastTimestamp:2026-01-23 19:55:33.081196329 +0000 UTC m=+0.646575709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hs5p8.gb1.brightbox.com,}" Jan 23 19:55:37.843251 kubelet[2505]: I0123 19:55:37.843191 2505 kubelet_node_status.go:78] "Successfully registered node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.845913 kubelet[2505]: E0123 19:55:37.843570 2505 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-hs5p8.gb1.brightbox.com\": node \"srv-hs5p8.gb1.brightbox.com\" not found" Jan 23 19:55:37.857732 kubelet[2505]: E0123 19:55:37.857556 2505 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-hs5p8.gb1.brightbox.com.188d745415a7d6ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hs5p8.gb1.brightbox.com,UID:srv-hs5p8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-hs5p8.gb1.brightbox.com,},FirstTimestamp:2026-01-23 19:55:33.115868911 +0000 UTC m=+0.681248283,LastTimestamp:2026-01-23 19:55:33.115868911 +0000 UTC m=+0.681248283,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hs5p8.gb1.brightbox.com,}" Jan 23 19:55:37.903939 kubelet[2505]: I0123 19:55:37.903253 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.917657 kubelet[2505]: E0123 19:55:37.917577 2505 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-hs5p8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.917902 kubelet[2505]: I0123 19:55:37.917878 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.923851 kubelet[2505]: E0123 19:55:37.922183 2505 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.924009 kubelet[2505]: I0123 19:55:37.923986 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:37.930169 kubelet[2505]: E0123 19:55:37.930123 2505 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-hs5p8.gb1.brightbox.com\" is forbidden: no PriorityClass 
with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:38.076087 kubelet[2505]: I0123 19:55:38.071423 2505 apiserver.go:52] "Watching apiserver" Jan 23 19:55:38.103017 kubelet[2505]: I0123 19:55:38.102906 2505 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:55:38.681819 kubelet[2505]: I0123 19:55:38.681375 2505 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:38.690214 kubelet[2505]: W0123 19:55:38.689845 2505 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 19:55:39.963245 systemd[1]: Reload requested from client PID 2783 ('systemctl') (unit session-11.scope)... Jan 23 19:55:39.964025 systemd[1]: Reloading... Jan 23 19:55:40.085992 zram_generator::config[2828]: No configuration found. Jan 23 19:55:40.481118 systemd[1]: Reloading finished in 516 ms. Jan 23 19:55:40.536394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:55:40.555788 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 19:55:40.556430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:55:40.556569 systemd[1]: kubelet.service: Consumed 1.294s CPU time, 130.4M memory peak. Jan 23 19:55:40.561764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 19:55:40.870316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 19:55:40.883458 (kubelet)[2892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 19:55:40.959457 kubelet[2892]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:55:40.959457 kubelet[2892]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 19:55:40.959457 kubelet[2892]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 19:55:40.960179 kubelet[2892]: I0123 19:55:40.959669 2892 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 19:55:40.973863 kubelet[2892]: I0123 19:55:40.973567 2892 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 19:55:40.973863 kubelet[2892]: I0123 19:55:40.973603 2892 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 19:55:40.974262 kubelet[2892]: I0123 19:55:40.974123 2892 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 19:55:40.976147 kubelet[2892]: I0123 19:55:40.976023 2892 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 19:55:40.981953 kubelet[2892]: I0123 19:55:40.980609 2892 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 19:55:40.990100 kubelet[2892]: I0123 19:55:40.990068 2892 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 19:55:41.003262 kubelet[2892]: I0123 19:55:41.002948 2892 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 19:55:41.003355 kubelet[2892]: I0123 19:55:41.003283 2892 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 19:55:41.003702 kubelet[2892]: I0123 19:55:41.003319 2892 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hs5p8.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 19:55:41.005086 kubelet[2892]: I0123 19:55:41.003720 2892 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 23 19:55:41.005086 kubelet[2892]: I0123 19:55:41.003739 2892 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 19:55:41.005086 kubelet[2892]: I0123 19:55:41.003981 2892 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:55:41.005086 kubelet[2892]: I0123 19:55:41.004312 2892 kubelet.go:446] "Attempting to sync node with API server" Jan 23 19:55:41.005086 kubelet[2892]: I0123 19:55:41.004356 2892 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 19:55:41.005086 kubelet[2892]: I0123 19:55:41.004385 2892 kubelet.go:352] "Adding apiserver pod source" Jan 23 19:55:41.005086 kubelet[2892]: I0123 19:55:41.004403 2892 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 19:55:41.014915 kubelet[2892]: I0123 19:55:41.012680 2892 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 19:55:41.014915 kubelet[2892]: I0123 19:55:41.013222 2892 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 19:55:41.017499 kubelet[2892]: I0123 19:55:41.016418 2892 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 19:55:41.017499 kubelet[2892]: I0123 19:55:41.016488 2892 server.go:1287] "Started kubelet" Jan 23 19:55:41.023529 kubelet[2892]: I0123 19:55:41.022741 2892 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 19:55:41.035930 kubelet[2892]: I0123 19:55:41.035869 2892 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 19:55:41.039491 kubelet[2892]: I0123 19:55:41.037262 2892 server.go:479] "Adding debug handlers to kubelet server" Jan 23 19:55:41.039491 kubelet[2892]: I0123 19:55:41.039264 2892 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 19:55:41.039949 kubelet[2892]: I0123 19:55:41.039677 2892 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 19:55:41.040856 kubelet[2892]: I0123 19:55:41.040158 2892 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 19:55:41.048876 kubelet[2892]: I0123 19:55:41.043668 2892 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 19:55:41.048876 kubelet[2892]: E0123 19:55:41.043958 2892 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hs5p8.gb1.brightbox.com\" not found" Jan 23 19:55:41.048876 kubelet[2892]: I0123 19:55:41.046198 2892 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 19:55:41.048876 kubelet[2892]: I0123 19:55:41.046416 2892 reconciler.go:26] "Reconciler: start to sync state" Jan 23 19:55:41.058174 kubelet[2892]: I0123 19:55:41.057905 2892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 19:55:41.061998 kubelet[2892]: I0123 19:55:41.061972 2892 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 19:55:41.062139 kubelet[2892]: I0123 19:55:41.062121 2892 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 19:55:41.062626 kubelet[2892]: I0123 19:55:41.062241 2892 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 19:55:41.062626 kubelet[2892]: I0123 19:55:41.062291 2892 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 19:55:41.062626 kubelet[2892]: E0123 19:55:41.062357 2892 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 19:55:41.075359 kubelet[2892]: I0123 19:55:41.075320 2892 factory.go:221] Registration of the containerd container factory successfully Jan 23 19:55:41.075359 kubelet[2892]: I0123 19:55:41.075349 2892 factory.go:221] Registration of the systemd container factory successfully Jan 23 19:55:41.075547 kubelet[2892]: I0123 19:55:41.075492 2892 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 19:55:41.080775 kubelet[2892]: E0123 19:55:41.080122 2892 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 19:55:41.164445 kubelet[2892]: E0123 19:55:41.162744 2892 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179508 2892 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179537 2892 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179574 2892 state_mem.go:36] "Initialized new in-memory state store" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179886 2892 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179905 2892 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179936 2892 policy_none.go:49] "None policy: Start" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179959 2892 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.179984 2892 state_mem.go:35] "Initializing new in-memory state store" Jan 23 19:55:41.180324 kubelet[2892]: I0123 19:55:41.180168 2892 state_mem.go:75] "Updated machine memory state" Jan 23 19:55:41.189902 kubelet[2892]: I0123 19:55:41.189346 2892 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 19:55:41.191705 kubelet[2892]: I0123 19:55:41.191266 2892 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 19:55:41.191705 kubelet[2892]: I0123 19:55:41.191313 2892 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 19:55:41.191910 kubelet[2892]: I0123 19:55:41.191718 2892 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 19:55:41.197835 kubelet[2892]: 
E0123 19:55:41.197803 2892 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 19:55:41.324676 kubelet[2892]: I0123 19:55:41.324276 2892 kubelet_node_status.go:75] "Attempting to register node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.337476 kubelet[2892]: I0123 19:55:41.337236 2892 kubelet_node_status.go:124] "Node was previously registered" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.337476 kubelet[2892]: I0123 19:55:41.337355 2892 kubelet_node_status.go:78] "Successfully registered node" node="srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.364369 kubelet[2892]: I0123 19:55:41.364326 2892 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.371110 kubelet[2892]: I0123 19:55:41.371072 2892 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.371980 kubelet[2892]: I0123 19:55:41.371888 2892 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.383853 kubelet[2892]: W0123 19:55:41.382635 2892 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 19:55:41.383853 kubelet[2892]: E0123 19:55:41.382752 2892 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-hs5p8.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.385425 kubelet[2892]: W0123 19:55:41.385396 2892 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 19:55:41.388953 kubelet[2892]: W0123 19:55:41.388932 2892 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 19:55:41.449296 kubelet[2892]: I0123 19:55:41.449078 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-k8s-certs\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.449569 kubelet[2892]: I0123 19:55:41.449542 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a20d4c1580f5afb4ec4d320b2c722ea3-kubeconfig\") pod \"kube-scheduler-srv-hs5p8.gb1.brightbox.com\" (UID: \"a20d4c1580f5afb4ec4d320b2c722ea3\") " pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.449736 kubelet[2892]: I0123 19:55:41.449714 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3bed4a4577bc711f6d193deed0c5e2a3-ca-certs\") pod \"kube-apiserver-srv-hs5p8.gb1.brightbox.com\" (UID: \"3bed4a4577bc711f6d193deed0c5e2a3\") " pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.450017 kubelet[2892]: I0123 19:55:41.449948 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3bed4a4577bc711f6d193deed0c5e2a3-k8s-certs\") pod \"kube-apiserver-srv-hs5p8.gb1.brightbox.com\" (UID: \"3bed4a4577bc711f6d193deed0c5e2a3\") " pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.450394 kubelet[2892]: I0123 19:55:41.450357 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3bed4a4577bc711f6d193deed0c5e2a3-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hs5p8.gb1.brightbox.com\" (UID: \"3bed4a4577bc711f6d193deed0c5e2a3\") " pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.450595 kubelet[2892]: I0123 19:55:41.450560 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-ca-certs\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.450932 kubelet[2892]: I0123 19:55:41.450908 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-flexvolume-dir\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.451130 kubelet[2892]: I0123 19:55:41.451080 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-kubeconfig\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") " pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:41.451269 kubelet[2892]: I0123 19:55:41.451243 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eebba25d2e16863884a10abe583725fd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hs5p8.gb1.brightbox.com\" (UID: \"eebba25d2e16863884a10abe583725fd\") 
" pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:42.008098 kubelet[2892]: I0123 19:55:42.008045 2892 apiserver.go:52] "Watching apiserver" Jan 23 19:55:42.046489 kubelet[2892]: I0123 19:55:42.046400 2892 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 19:55:42.132758 kubelet[2892]: I0123 19:55:42.132717 2892 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:42.143535 kubelet[2892]: W0123 19:55:42.143491 2892 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 23 19:55:42.143966 kubelet[2892]: E0123 19:55:42.143574 2892 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-hs5p8.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" Jan 23 19:55:42.218406 kubelet[2892]: I0123 19:55:42.218202 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-hs5p8.gb1.brightbox.com" podStartSLOduration=4.218141597 podStartE2EDuration="4.218141597s" podCreationTimestamp="2026-01-23 19:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:55:42.190795168 +0000 UTC m=+1.298364852" watchObservedRunningTime="2026-01-23 19:55:42.218141597 +0000 UTC m=+1.325711263" Jan 23 19:55:42.221129 kubelet[2892]: I0123 19:55:42.220989 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-hs5p8.gb1.brightbox.com" podStartSLOduration=1.220978509 podStartE2EDuration="1.220978509s" podCreationTimestamp="2026-01-23 19:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 19:55:42.218109331 +0000 UTC m=+1.325679015" watchObservedRunningTime="2026-01-23 19:55:42.220978509 +0000 UTC m=+1.328548195" Jan 23 19:55:46.305197 kubelet[2892]: I0123 19:55:46.305116 2892 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 19:55:46.308455 kubelet[2892]: I0123 19:55:46.307954 2892 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 19:55:46.308573 containerd[1583]: time="2026-01-23T19:55:46.307165842Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 19:55:46.971781 kubelet[2892]: I0123 19:55:46.971664 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-hs5p8.gb1.brightbox.com" podStartSLOduration=5.971633461 podStartE2EDuration="5.971633461s" podCreationTimestamp="2026-01-23 19:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:55:42.258132757 +0000 UTC m=+1.365702440" watchObservedRunningTime="2026-01-23 19:55:46.971633461 +0000 UTC m=+6.079203167" Jan 23 19:55:47.343624 systemd[1]: Created slice kubepods-besteffort-pode7584af5_0ec2_4129_a23d_0d21024d7086.slice - libcontainer container kubepods-besteffort-pode7584af5_0ec2_4129_a23d_0d21024d7086.slice. 
Jan 23 19:55:47.389899 kubelet[2892]: I0123 19:55:47.389829 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c9xz\" (UniqueName: \"kubernetes.io/projected/e7584af5-0ec2-4129-a23d-0d21024d7086-kube-api-access-9c9xz\") pod \"kube-proxy-pbq5b\" (UID: \"e7584af5-0ec2-4129-a23d-0d21024d7086\") " pod="kube-system/kube-proxy-pbq5b" Jan 23 19:55:47.390667 kubelet[2892]: I0123 19:55:47.389920 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7584af5-0ec2-4129-a23d-0d21024d7086-lib-modules\") pod \"kube-proxy-pbq5b\" (UID: \"e7584af5-0ec2-4129-a23d-0d21024d7086\") " pod="kube-system/kube-proxy-pbq5b" Jan 23 19:55:47.390667 kubelet[2892]: I0123 19:55:47.390005 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e7584af5-0ec2-4129-a23d-0d21024d7086-kube-proxy\") pod \"kube-proxy-pbq5b\" (UID: \"e7584af5-0ec2-4129-a23d-0d21024d7086\") " pod="kube-system/kube-proxy-pbq5b" Jan 23 19:55:47.390667 kubelet[2892]: I0123 19:55:47.390034 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7584af5-0ec2-4129-a23d-0d21024d7086-xtables-lock\") pod \"kube-proxy-pbq5b\" (UID: \"e7584af5-0ec2-4129-a23d-0d21024d7086\") " pod="kube-system/kube-proxy-pbq5b" Jan 23 19:55:47.444234 kubelet[2892]: W0123 19:55:47.444188 2892 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:srv-hs5p8.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-hs5p8.gb1.brightbox.com' and this object Jan 23 19:55:47.446972 
kubelet[2892]: E0123 19:55:47.444250 2892 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:srv-hs5p8.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-hs5p8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 23 19:55:47.446383 systemd[1]: Created slice kubepods-besteffort-pod7034c491_62b3_47b6_9054_b3860c222cf5.slice - libcontainer container kubepods-besteffort-pod7034c491_62b3_47b6_9054_b3860c222cf5.slice. Jan 23 19:55:47.449499 kubelet[2892]: W0123 19:55:47.449257 2892 reflector.go:569] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-hs5p8.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'srv-hs5p8.gb1.brightbox.com' and this object Jan 23 19:55:47.449499 kubelet[2892]: E0123 19:55:47.449324 2892 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-hs5p8.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-hs5p8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 23 19:55:47.491120 kubelet[2892]: I0123 19:55:47.491063 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7034c491-62b3-47b6-9054-b3860c222cf5-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ld7vp\" (UID: 
\"7034c491-62b3-47b6-9054-b3860c222cf5\") " pod="tigera-operator/tigera-operator-7dcd859c48-ld7vp" Jan 23 19:55:47.491420 kubelet[2892]: I0123 19:55:47.491171 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx46b\" (UniqueName: \"kubernetes.io/projected/7034c491-62b3-47b6-9054-b3860c222cf5-kube-api-access-lx46b\") pod \"tigera-operator-7dcd859c48-ld7vp\" (UID: \"7034c491-62b3-47b6-9054-b3860c222cf5\") " pod="tigera-operator/tigera-operator-7dcd859c48-ld7vp" Jan 23 19:55:47.655854 containerd[1583]: time="2026-01-23T19:55:47.655646355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbq5b,Uid:e7584af5-0ec2-4129-a23d-0d21024d7086,Namespace:kube-system,Attempt:0,}" Jan 23 19:55:47.688513 containerd[1583]: time="2026-01-23T19:55:47.688454629Z" level=info msg="connecting to shim 49711a75df185d9cc0034faf413e82d30db3775cac8c0671482a0a4f426bcf94" address="unix:///run/containerd/s/caadbbbbaae3eed72f5fe505d31ac80b0b6cca6128fe7d4b3d1f7ed1cda65ae7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:55:47.741030 systemd[1]: Started cri-containerd-49711a75df185d9cc0034faf413e82d30db3775cac8c0671482a0a4f426bcf94.scope - libcontainer container 49711a75df185d9cc0034faf413e82d30db3775cac8c0671482a0a4f426bcf94. 
Jan 23 19:55:47.795221 containerd[1583]: time="2026-01-23T19:55:47.795164493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pbq5b,Uid:e7584af5-0ec2-4129-a23d-0d21024d7086,Namespace:kube-system,Attempt:0,} returns sandbox id \"49711a75df185d9cc0034faf413e82d30db3775cac8c0671482a0a4f426bcf94\"" Jan 23 19:55:47.800024 containerd[1583]: time="2026-01-23T19:55:47.799984841Z" level=info msg="CreateContainer within sandbox \"49711a75df185d9cc0034faf413e82d30db3775cac8c0671482a0a4f426bcf94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 19:55:47.832063 containerd[1583]: time="2026-01-23T19:55:47.831990227Z" level=info msg="Container 1510191b5aeceb3c7408ecb917fcf43acb8e27206c012beee3b7dee4bb5f443a: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:55:47.842440 containerd[1583]: time="2026-01-23T19:55:47.842279465Z" level=info msg="CreateContainer within sandbox \"49711a75df185d9cc0034faf413e82d30db3775cac8c0671482a0a4f426bcf94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1510191b5aeceb3c7408ecb917fcf43acb8e27206c012beee3b7dee4bb5f443a\"" Jan 23 19:55:47.844248 containerd[1583]: time="2026-01-23T19:55:47.844153743Z" level=info msg="StartContainer for \"1510191b5aeceb3c7408ecb917fcf43acb8e27206c012beee3b7dee4bb5f443a\"" Jan 23 19:55:47.847010 containerd[1583]: time="2026-01-23T19:55:47.846912295Z" level=info msg="connecting to shim 1510191b5aeceb3c7408ecb917fcf43acb8e27206c012beee3b7dee4bb5f443a" address="unix:///run/containerd/s/caadbbbbaae3eed72f5fe505d31ac80b0b6cca6128fe7d4b3d1f7ed1cda65ae7" protocol=ttrpc version=3 Jan 23 19:55:47.885065 systemd[1]: Started cri-containerd-1510191b5aeceb3c7408ecb917fcf43acb8e27206c012beee3b7dee4bb5f443a.scope - libcontainer container 1510191b5aeceb3c7408ecb917fcf43acb8e27206c012beee3b7dee4bb5f443a. 
Jan 23 19:55:47.995637 containerd[1583]: time="2026-01-23T19:55:47.995463176Z" level=info msg="StartContainer for \"1510191b5aeceb3c7408ecb917fcf43acb8e27206c012beee3b7dee4bb5f443a\" returns successfully" Jan 23 19:55:48.168788 kubelet[2892]: I0123 19:55:48.168614 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pbq5b" podStartSLOduration=1.168596158 podStartE2EDuration="1.168596158s" podCreationTimestamp="2026-01-23 19:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:55:48.167770935 +0000 UTC m=+7.275340619" watchObservedRunningTime="2026-01-23 19:55:48.168596158 +0000 UTC m=+7.276165842" Jan 23 19:55:48.601869 kubelet[2892]: E0123 19:55:48.601561 2892 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 19:55:48.602500 kubelet[2892]: E0123 19:55:48.601895 2892 projected.go:194] Error preparing data for projected volume kube-api-access-lx46b for pod tigera-operator/tigera-operator-7dcd859c48-ld7vp: failed to sync configmap cache: timed out waiting for the condition Jan 23 19:55:48.602500 kubelet[2892]: E0123 19:55:48.602067 2892 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7034c491-62b3-47b6-9054-b3860c222cf5-kube-api-access-lx46b podName:7034c491-62b3-47b6-9054-b3860c222cf5 nodeName:}" failed. No retries permitted until 2026-01-23 19:55:49.102013079 +0000 UTC m=+8.209582754 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lx46b" (UniqueName: "kubernetes.io/projected/7034c491-62b3-47b6-9054-b3860c222cf5-kube-api-access-lx46b") pod "tigera-operator-7dcd859c48-ld7vp" (UID: "7034c491-62b3-47b6-9054-b3860c222cf5") : failed to sync configmap cache: timed out waiting for the condition Jan 23 19:55:49.256998 containerd[1583]: time="2026-01-23T19:55:49.256872296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ld7vp,Uid:7034c491-62b3-47b6-9054-b3860c222cf5,Namespace:tigera-operator,Attempt:0,}" Jan 23 19:55:49.282075 containerd[1583]: time="2026-01-23T19:55:49.281876726Z" level=info msg="connecting to shim 110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038" address="unix:///run/containerd/s/4812d2ebe7a9491c96b49f2f9f6fceb647cae843297270a89c3533d10a8187ec" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:55:49.325075 systemd[1]: Started cri-containerd-110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038.scope - libcontainer container 110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038. Jan 23 19:55:49.410636 containerd[1583]: time="2026-01-23T19:55:49.410570710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ld7vp,Uid:7034c491-62b3-47b6-9054-b3860c222cf5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038\"" Jan 23 19:55:49.415337 containerd[1583]: time="2026-01-23T19:55:49.415195878Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 19:55:51.329299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1265971431.mount: Deactivated successfully. 
Jan 23 19:55:52.371152 containerd[1583]: time="2026-01-23T19:55:52.371042654Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:52.372985 containerd[1583]: time="2026-01-23T19:55:52.372942055Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 19:55:52.373946 containerd[1583]: time="2026-01-23T19:55:52.373887236Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:52.376464 containerd[1583]: time="2026-01-23T19:55:52.376404337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:55:52.377841 containerd[1583]: time="2026-01-23T19:55:52.377472578Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.962186011s" Jan 23 19:55:52.377841 containerd[1583]: time="2026-01-23T19:55:52.377526803Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 19:55:52.381596 containerd[1583]: time="2026-01-23T19:55:52.381563245Z" level=info msg="CreateContainer within sandbox \"110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 19:55:52.391503 containerd[1583]: time="2026-01-23T19:55:52.391459495Z" level=info msg="Container 
270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:55:52.397526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141752200.mount: Deactivated successfully. Jan 23 19:55:52.411272 containerd[1583]: time="2026-01-23T19:55:52.411198867Z" level=info msg="CreateContainer within sandbox \"110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123\"" Jan 23 19:55:52.413134 containerd[1583]: time="2026-01-23T19:55:52.413073004Z" level=info msg="StartContainer for \"270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123\"" Jan 23 19:55:52.414887 containerd[1583]: time="2026-01-23T19:55:52.414803519Z" level=info msg="connecting to shim 270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123" address="unix:///run/containerd/s/4812d2ebe7a9491c96b49f2f9f6fceb647cae843297270a89c3533d10a8187ec" protocol=ttrpc version=3 Jan 23 19:55:52.461118 systemd[1]: Started cri-containerd-270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123.scope - libcontainer container 270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123. Jan 23 19:55:52.595621 containerd[1583]: time="2026-01-23T19:55:52.595529541Z" level=info msg="StartContainer for \"270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123\" returns successfully" Jan 23 19:55:56.344730 systemd[1]: cri-containerd-270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123.scope: Deactivated successfully. 
Jan 23 19:55:56.451834 containerd[1583]: time="2026-01-23T19:55:56.451068846Z" level=info msg="received container exit event container_id:\"270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123\" id:\"270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123\" pid:3212 exit_status:1 exited_at:{seconds:1769198156 nanos:356184902}" Jan 23 19:55:56.520713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123-rootfs.mount: Deactivated successfully. Jan 23 19:55:57.187343 kubelet[2892]: I0123 19:55:57.187290 2892 scope.go:117] "RemoveContainer" containerID="270a2033fca8ae2b201c70732b902505f27505dee18dbf3a44f3591ce7b03123" Jan 23 19:55:57.304831 containerd[1583]: time="2026-01-23T19:55:57.304586293Z" level=info msg="CreateContainer within sandbox \"110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 19:55:57.318190 containerd[1583]: time="2026-01-23T19:55:57.317422204Z" level=info msg="Container e86c18b73df869706e55e2be727f62acdaccbcff88a9afdaf70cf37f5687d3eb: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:55:57.353005 containerd[1583]: time="2026-01-23T19:55:57.352923873Z" level=info msg="CreateContainer within sandbox \"110e12780afcb11b973b610c61fbc7be788e705209b9260834b4274b98bd6038\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e86c18b73df869706e55e2be727f62acdaccbcff88a9afdaf70cf37f5687d3eb\"" Jan 23 19:55:57.356645 containerd[1583]: time="2026-01-23T19:55:57.356571612Z" level=info msg="StartContainer for \"e86c18b73df869706e55e2be727f62acdaccbcff88a9afdaf70cf37f5687d3eb\"" Jan 23 19:55:57.359469 containerd[1583]: time="2026-01-23T19:55:57.359408315Z" level=info msg="connecting to shim e86c18b73df869706e55e2be727f62acdaccbcff88a9afdaf70cf37f5687d3eb" 
address="unix:///run/containerd/s/4812d2ebe7a9491c96b49f2f9f6fceb647cae843297270a89c3533d10a8187ec" protocol=ttrpc version=3 Jan 23 19:55:57.408284 systemd[1]: Started cri-containerd-e86c18b73df869706e55e2be727f62acdaccbcff88a9afdaf70cf37f5687d3eb.scope - libcontainer container e86c18b73df869706e55e2be727f62acdaccbcff88a9afdaf70cf37f5687d3eb. Jan 23 19:55:57.548972 containerd[1583]: time="2026-01-23T19:55:57.548396800Z" level=info msg="StartContainer for \"e86c18b73df869706e55e2be727f62acdaccbcff88a9afdaf70cf37f5687d3eb\" returns successfully" Jan 23 19:55:58.210385 kubelet[2892]: I0123 19:55:58.210289 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ld7vp" podStartSLOduration=8.243681634 podStartE2EDuration="11.210260361s" podCreationTimestamp="2026-01-23 19:55:47 +0000 UTC" firstStartedPulling="2026-01-23 19:55:49.412678563 +0000 UTC m=+8.520248226" lastFinishedPulling="2026-01-23 19:55:52.379257274 +0000 UTC m=+11.486826953" observedRunningTime="2026-01-23 19:55:53.184381809 +0000 UTC m=+12.291951492" watchObservedRunningTime="2026-01-23 19:55:58.210260361 +0000 UTC m=+17.317830035" Jan 23 19:56:00.143517 sudo[1885]: pam_unix(sudo:session): session closed for user root Jan 23 19:56:00.235911 sshd[1884]: Connection closed by 68.220.241.50 port 53200 Jan 23 19:56:00.237850 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Jan 23 19:56:00.249172 systemd[1]: sshd@8-10.230.78.134:22-68.220.241.50:53200.service: Deactivated successfully. Jan 23 19:56:00.255094 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 19:56:00.255606 systemd[1]: session-11.scope: Consumed 8.118s CPU time, 158.9M memory peak. Jan 23 19:56:00.259866 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 23 19:56:00.262994 systemd-logind[1561]: Removed session 11. 
Jan 23 19:56:08.999570 systemd[1]: Created slice kubepods-besteffort-pod470be53d_3ad8_433a_960f_6a1243497067.slice - libcontainer container kubepods-besteffort-pod470be53d_3ad8_433a_960f_6a1243497067.slice. Jan 23 19:56:09.046121 kubelet[2892]: I0123 19:56:09.046045 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/470be53d-3ad8-433a-960f-6a1243497067-typha-certs\") pod \"calico-typha-6d64b97848-m6stp\" (UID: \"470be53d-3ad8-433a-960f-6a1243497067\") " pod="calico-system/calico-typha-6d64b97848-m6stp" Jan 23 19:56:09.046121 kubelet[2892]: I0123 19:56:09.046122 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/470be53d-3ad8-433a-960f-6a1243497067-tigera-ca-bundle\") pod \"calico-typha-6d64b97848-m6stp\" (UID: \"470be53d-3ad8-433a-960f-6a1243497067\") " pod="calico-system/calico-typha-6d64b97848-m6stp" Jan 23 19:56:09.046121 kubelet[2892]: I0123 19:56:09.046163 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvznh\" (UniqueName: \"kubernetes.io/projected/470be53d-3ad8-433a-960f-6a1243497067-kube-api-access-zvznh\") pod \"calico-typha-6d64b97848-m6stp\" (UID: \"470be53d-3ad8-433a-960f-6a1243497067\") " pod="calico-system/calico-typha-6d64b97848-m6stp" Jan 23 19:56:09.270876 systemd[1]: Created slice kubepods-besteffort-podd29a2de4_d7db_4b40_bed8_21d022542197.slice - libcontainer container kubepods-besteffort-podd29a2de4_d7db_4b40_bed8_21d022542197.slice. 
Jan 23 19:56:09.306783 containerd[1583]: time="2026-01-23T19:56:09.306717856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d64b97848-m6stp,Uid:470be53d-3ad8-433a-960f-6a1243497067,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:09.348665 kubelet[2892]: I0123 19:56:09.348570 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-cni-net-dir\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.348665 kubelet[2892]: I0123 19:56:09.348647 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-flexvol-driver-host\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.349170 kubelet[2892]: I0123 19:56:09.348681 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-var-run-calico\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.349170 kubelet[2892]: I0123 19:56:09.348708 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-lib-modules\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.349170 kubelet[2892]: I0123 19:56:09.348755 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/d29a2de4-d7db-4b40-bed8-21d022542197-node-certs\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.349170 kubelet[2892]: I0123 19:56:09.348787 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-cni-log-dir\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.350079 kubelet[2892]: I0123 19:56:09.349089 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d29a2de4-d7db-4b40-bed8-21d022542197-tigera-ca-bundle\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.350079 kubelet[2892]: I0123 19:56:09.349673 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-var-lib-calico\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.350079 kubelet[2892]: I0123 19:56:09.349769 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-xtables-lock\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.350079 kubelet[2892]: I0123 19:56:09.349901 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-policysync\") pod 
\"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.350333 kubelet[2892]: I0123 19:56:09.350085 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d29a2de4-d7db-4b40-bed8-21d022542197-cni-bin-dir\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.350598 kubelet[2892]: I0123 19:56:09.350439 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2z6\" (UniqueName: \"kubernetes.io/projected/d29a2de4-d7db-4b40-bed8-21d022542197-kube-api-access-2b2z6\") pod \"calico-node-s99m8\" (UID: \"d29a2de4-d7db-4b40-bed8-21d022542197\") " pod="calico-system/calico-node-s99m8" Jan 23 19:56:09.359916 containerd[1583]: time="2026-01-23T19:56:09.359839884Z" level=info msg="connecting to shim 66b0e3dcd22949373723f1305e3caa238b1c62de5ad28f19396b0692f70cefa8" address="unix:///run/containerd/s/ef6e78d68f4099c10fd3963713352fef0b4c5071b115031d8e401c1a9dc91b9c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:09.426140 systemd[1]: Started cri-containerd-66b0e3dcd22949373723f1305e3caa238b1c62de5ad28f19396b0692f70cefa8.scope - libcontainer container 66b0e3dcd22949373723f1305e3caa238b1c62de5ad28f19396b0692f70cefa8. 
Jan 23 19:56:09.448493 kubelet[2892]: E0123 19:56:09.448284 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:09.454199 kubelet[2892]: E0123 19:56:09.453948 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.454199 kubelet[2892]: W0123 19:56:09.453983 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.455048 kubelet[2892]: E0123 19:56:09.454916 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.455297 kubelet[2892]: E0123 19:56:09.455277 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.455734 kubelet[2892]: W0123 19:56:09.455708 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.455926 kubelet[2892]: E0123 19:56:09.455892 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.457001 kubelet[2892]: E0123 19:56:09.456980 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.457229 kubelet[2892]: W0123 19:56:09.457110 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.457626 kubelet[2892]: E0123 19:56:09.457451 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.457626 kubelet[2892]: W0123 19:56:09.457464 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.457626 kubelet[2892]: E0123 19:56:09.457610 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.457916 kubelet[2892]: E0123 19:56:09.457640 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.458186 kubelet[2892]: E0123 19:56:09.458077 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.458186 kubelet[2892]: W0123 19:56:09.458120 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.458186 kubelet[2892]: E0123 19:56:09.458170 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.459123 kubelet[2892]: E0123 19:56:09.458989 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.459123 kubelet[2892]: W0123 19:56:09.459009 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.459318 kubelet[2892]: E0123 19:56:09.459044 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.459771 kubelet[2892]: E0123 19:56:09.459462 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.459938 kubelet[2892]: W0123 19:56:09.459914 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.460084 kubelet[2892]: E0123 19:56:09.460056 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.461010 kubelet[2892]: E0123 19:56:09.460989 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.461162 kubelet[2892]: W0123 19:56:09.461140 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.461487 kubelet[2892]: E0123 19:56:09.461403 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.461762 kubelet[2892]: E0123 19:56:09.461743 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.461904 kubelet[2892]: W0123 19:56:09.461884 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.462036 kubelet[2892]: E0123 19:56:09.462004 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.462897 kubelet[2892]: E0123 19:56:09.462877 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.463034 kubelet[2892]: W0123 19:56:09.463012 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.463252 kubelet[2892]: E0123 19:56:09.463184 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.463936 kubelet[2892]: E0123 19:56:09.463915 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.465688 kubelet[2892]: W0123 19:56:09.465656 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.465903 kubelet[2892]: E0123 19:56:09.465870 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.466273 kubelet[2892]: E0123 19:56:09.466253 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.466580 kubelet[2892]: W0123 19:56:09.466430 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.466782 kubelet[2892]: E0123 19:56:09.466763 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.466962 kubelet[2892]: W0123 19:56:09.466937 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.467303 kubelet[2892]: E0123 19:56:09.467284 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.467417 kubelet[2892]: W0123 19:56:09.467396 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.467527 kubelet[2892]: E0123 19:56:09.467501 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.467629 kubelet[2892]: E0123 19:56:09.467606 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.467892 kubelet[2892]: E0123 19:56:09.467509 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.469035 kubelet[2892]: E0123 19:56:09.469011 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.469035 kubelet[2892]: W0123 19:56:09.469032 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.469189 kubelet[2892]: E0123 19:56:09.469073 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.469468 kubelet[2892]: E0123 19:56:09.469443 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.469538 kubelet[2892]: W0123 19:56:09.469484 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.469538 kubelet[2892]: E0123 19:56:09.469504 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.486975 kubelet[2892]: E0123 19:56:09.486925 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.486975 kubelet[2892]: W0123 19:56:09.486967 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.487218 kubelet[2892]: E0123 19:56:09.487000 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.492830 kubelet[2892]: E0123 19:56:09.491017 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.492830 kubelet[2892]: W0123 19:56:09.491043 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.492830 kubelet[2892]: E0123 19:56:09.491064 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.501711 kubelet[2892]: E0123 19:56:09.501671 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.501711 kubelet[2892]: W0123 19:56:09.501703 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.501967 kubelet[2892]: E0123 19:56:09.501731 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.530159 kubelet[2892]: E0123 19:56:09.529894 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.530159 kubelet[2892]: W0123 19:56:09.529933 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.530159 kubelet[2892]: E0123 19:56:09.529970 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.530511 kubelet[2892]: E0123 19:56:09.530250 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.530511 kubelet[2892]: W0123 19:56:09.530265 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.530511 kubelet[2892]: E0123 19:56:09.530279 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.530630 kubelet[2892]: E0123 19:56:09.530534 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.530630 kubelet[2892]: W0123 19:56:09.530547 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.530630 kubelet[2892]: E0123 19:56:09.530562 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.532501 kubelet[2892]: E0123 19:56:09.531105 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.532501 kubelet[2892]: W0123 19:56:09.531125 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.532501 kubelet[2892]: E0123 19:56:09.531141 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.532501 kubelet[2892]: E0123 19:56:09.531549 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.532501 kubelet[2892]: W0123 19:56:09.531572 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.532501 kubelet[2892]: E0123 19:56:09.531879 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.532501 kubelet[2892]: E0123 19:56:09.532129 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.532501 kubelet[2892]: W0123 19:56:09.532142 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.532501 kubelet[2892]: E0123 19:56:09.532156 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.532501 kubelet[2892]: E0123 19:56:09.532409 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.533040 kubelet[2892]: W0123 19:56:09.532422 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.533040 kubelet[2892]: E0123 19:56:09.532437 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.533040 kubelet[2892]: E0123 19:56:09.533032 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.533232 kubelet[2892]: W0123 19:56:09.533046 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.533232 kubelet[2892]: E0123 19:56:09.533061 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.535692 kubelet[2892]: E0123 19:56:09.533319 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.535692 kubelet[2892]: W0123 19:56:09.533338 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.535692 kubelet[2892]: E0123 19:56:09.533353 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.535692 kubelet[2892]: E0123 19:56:09.533600 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.535692 kubelet[2892]: W0123 19:56:09.533614 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.535692 kubelet[2892]: E0123 19:56:09.533642 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.535692 kubelet[2892]: E0123 19:56:09.533997 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.535692 kubelet[2892]: W0123 19:56:09.534040 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.535692 kubelet[2892]: E0123 19:56:09.534057 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.535692 kubelet[2892]: E0123 19:56:09.534434 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.537876 kubelet[2892]: W0123 19:56:09.534463 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.537876 kubelet[2892]: E0123 19:56:09.534488 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.537876 kubelet[2892]: E0123 19:56:09.535275 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.537876 kubelet[2892]: W0123 19:56:09.535291 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.537876 kubelet[2892]: E0123 19:56:09.535306 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.537876 kubelet[2892]: E0123 19:56:09.535561 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.537876 kubelet[2892]: W0123 19:56:09.535575 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.537876 kubelet[2892]: E0123 19:56:09.535626 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.537876 kubelet[2892]: E0123 19:56:09.535894 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.537876 kubelet[2892]: W0123 19:56:09.535908 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.538426 kubelet[2892]: E0123 19:56:09.535922 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.538426 kubelet[2892]: E0123 19:56:09.536189 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.538426 kubelet[2892]: W0123 19:56:09.536203 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.538426 kubelet[2892]: E0123 19:56:09.536217 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.538426 kubelet[2892]: E0123 19:56:09.536755 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.538426 kubelet[2892]: W0123 19:56:09.536768 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.538426 kubelet[2892]: E0123 19:56:09.536783 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.538426 kubelet[2892]: E0123 19:56:09.537031 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.538426 kubelet[2892]: W0123 19:56:09.537044 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.538426 kubelet[2892]: E0123 19:56:09.537058 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.541778 kubelet[2892]: E0123 19:56:09.537685 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.541778 kubelet[2892]: W0123 19:56:09.537699 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.541778 kubelet[2892]: E0123 19:56:09.537714 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.541778 kubelet[2892]: E0123 19:56:09.537995 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.541778 kubelet[2892]: W0123 19:56:09.538008 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.541778 kubelet[2892]: E0123 19:56:09.538022 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.555000 kubelet[2892]: E0123 19:56:09.554918 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.555000 kubelet[2892]: W0123 19:56:09.554963 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.555000 kubelet[2892]: E0123 19:56:09.554995 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.555421 kubelet[2892]: I0123 19:56:09.555031 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/981744d6-418c-41e4-8d22-4fb530fbf1db-kubelet-dir\") pod \"csi-node-driver-4gplb\" (UID: \"981744d6-418c-41e4-8d22-4fb530fbf1db\") " pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:09.556072 kubelet[2892]: E0123 19:56:09.555995 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.556072 kubelet[2892]: W0123 19:56:09.556018 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.556072 kubelet[2892]: E0123 19:56:09.556036 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.556072 kubelet[2892]: I0123 19:56:09.556063 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/981744d6-418c-41e4-8d22-4fb530fbf1db-varrun\") pod \"csi-node-driver-4gplb\" (UID: \"981744d6-418c-41e4-8d22-4fb530fbf1db\") " pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:09.556746 kubelet[2892]: E0123 19:56:09.556718 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.556746 kubelet[2892]: W0123 19:56:09.556744 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.556882 kubelet[2892]: E0123 19:56:09.556761 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.556882 kubelet[2892]: I0123 19:56:09.556784 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qqb\" (UniqueName: \"kubernetes.io/projected/981744d6-418c-41e4-8d22-4fb530fbf1db-kube-api-access-p9qqb\") pod \"csi-node-driver-4gplb\" (UID: \"981744d6-418c-41e4-8d22-4fb530fbf1db\") " pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:09.557532 kubelet[2892]: E0123 19:56:09.557492 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.557532 kubelet[2892]: W0123 19:56:09.557513 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.557673 kubelet[2892]: E0123 19:56:09.557558 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.557990 kubelet[2892]: I0123 19:56:09.557587 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/981744d6-418c-41e4-8d22-4fb530fbf1db-registration-dir\") pod \"csi-node-driver-4gplb\" (UID: \"981744d6-418c-41e4-8d22-4fb530fbf1db\") " pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:09.558856 kubelet[2892]: E0123 19:56:09.557904 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.558856 kubelet[2892]: W0123 19:56:09.558855 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.559289 kubelet[2892]: E0123 19:56:09.558889 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.559289 kubelet[2892]: E0123 19:56:09.559217 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.559289 kubelet[2892]: W0123 19:56:09.559241 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.559452 kubelet[2892]: E0123 19:56:09.559334 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.559913 kubelet[2892]: E0123 19:56:09.559891 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.559913 kubelet[2892]: W0123 19:56:09.559911 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.560597 kubelet[2892]: E0123 19:56:09.560133 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.560597 kubelet[2892]: E0123 19:56:09.560224 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.560597 kubelet[2892]: W0123 19:56:09.560236 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.560597 kubelet[2892]: E0123 19:56:09.560325 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.560597 kubelet[2892]: E0123 19:56:09.560536 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.560597 kubelet[2892]: W0123 19:56:09.560548 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.561110 kubelet[2892]: E0123 19:56:09.560641 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.561110 kubelet[2892]: E0123 19:56:09.560975 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.561110 kubelet[2892]: W0123 19:56:09.560988 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.561110 kubelet[2892]: E0123 19:56:09.561091 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.561690 kubelet[2892]: I0123 19:56:09.561118 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/981744d6-418c-41e4-8d22-4fb530fbf1db-socket-dir\") pod \"csi-node-driver-4gplb\" (UID: \"981744d6-418c-41e4-8d22-4fb530fbf1db\") " pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:09.561690 kubelet[2892]: E0123 19:56:09.561445 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.561690 kubelet[2892]: W0123 19:56:09.561460 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.561690 kubelet[2892]: E0123 19:56:09.561493 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.562736 kubelet[2892]: E0123 19:56:09.562685 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.562736 kubelet[2892]: W0123 19:56:09.562706 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.562736 kubelet[2892]: E0123 19:56:09.562729 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.563086 kubelet[2892]: E0123 19:56:09.563060 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.563086 kubelet[2892]: W0123 19:56:09.563073 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.563223 kubelet[2892]: E0123 19:56:09.563088 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.564209 kubelet[2892]: E0123 19:56:09.563425 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.564209 kubelet[2892]: W0123 19:56:09.563444 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.564209 kubelet[2892]: E0123 19:56:09.563459 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.564209 kubelet[2892]: E0123 19:56:09.563732 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.564209 kubelet[2892]: W0123 19:56:09.563745 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.564209 kubelet[2892]: E0123 19:56:09.563762 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.585241 containerd[1583]: time="2026-01-23T19:56:09.585160186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s99m8,Uid:d29a2de4-d7db-4b40-bed8-21d022542197,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:09.634429 containerd[1583]: time="2026-01-23T19:56:09.634264323Z" level=info msg="connecting to shim 0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811" address="unix:///run/containerd/s/0e00e3e2b82a580d46f30b8d5a767bdb15ca12da908a14296907894af1791b0c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:09.666918 kubelet[2892]: E0123 19:56:09.666854 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.666918 kubelet[2892]: W0123 19:56:09.666897 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.666918 kubelet[2892]: E0123 19:56:09.666929 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.667705 kubelet[2892]: E0123 19:56:09.667302 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.667705 kubelet[2892]: W0123 19:56:09.667322 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.667705 kubelet[2892]: E0123 19:56:09.667356 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.670335 kubelet[2892]: E0123 19:56:09.667797 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.670335 kubelet[2892]: W0123 19:56:09.667871 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.670335 kubelet[2892]: E0123 19:56:09.667927 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.670335 kubelet[2892]: E0123 19:56:09.668409 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.670335 kubelet[2892]: W0123 19:56:09.668423 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.670335 kubelet[2892]: E0123 19:56:09.668456 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.670335 kubelet[2892]: E0123 19:56:09.668863 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.670335 kubelet[2892]: W0123 19:56:09.668878 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.670335 kubelet[2892]: E0123 19:56:09.668931 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.670335 kubelet[2892]: E0123 19:56:09.669250 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.670800 kubelet[2892]: W0123 19:56:09.669265 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.670800 kubelet[2892]: E0123 19:56:09.669280 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.670800 kubelet[2892]: E0123 19:56:09.669615 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.670800 kubelet[2892]: W0123 19:56:09.669629 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.670800 kubelet[2892]: E0123 19:56:09.669660 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.670800 kubelet[2892]: E0123 19:56:09.669961 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.670800 kubelet[2892]: W0123 19:56:09.669974 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.670800 kubelet[2892]: E0123 19:56:09.670005 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.675887 kubelet[2892]: E0123 19:56:09.675840 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.675887 kubelet[2892]: W0123 19:56:09.675862 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.676190 kubelet[2892]: E0123 19:56:09.676058 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.676602 kubelet[2892]: E0123 19:56:09.676579 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.676730 kubelet[2892]: W0123 19:56:09.676697 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.676909 kubelet[2892]: E0123 19:56:09.676886 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.677351 kubelet[2892]: E0123 19:56:09.677332 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.677745 kubelet[2892]: W0123 19:56:09.677537 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.678051 kubelet[2892]: E0123 19:56:09.677932 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.678195 kubelet[2892]: E0123 19:56:09.678178 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.678396 kubelet[2892]: W0123 19:56:09.678281 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.678396 kubelet[2892]: E0123 19:56:09.678316 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.679687 kubelet[2892]: E0123 19:56:09.679318 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.679687 kubelet[2892]: W0123 19:56:09.679353 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.679967 kubelet[2892]: E0123 19:56:09.679942 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.680628 kubelet[2892]: E0123 19:56:09.680531 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.680734 kubelet[2892]: W0123 19:56:09.680714 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.683776 kubelet[2892]: E0123 19:56:09.683634 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.683776 kubelet[2892]: W0123 19:56:09.683662 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.685303 kubelet[2892]: E0123 19:56:09.685282 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.685844 kubelet[2892]: W0123 19:56:09.685540 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.687716 kubelet[2892]: E0123 19:56:09.687489 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.687716 kubelet[2892]: W0123 19:56:09.687510 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.687716 kubelet[2892]: E0123 19:56:09.687541 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.690063 kubelet[2892]: E0123 19:56:09.690005 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.691117 kubelet[2892]: W0123 19:56:09.690170 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.691117 kubelet[2892]: E0123 19:56:09.690199 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.694260 kubelet[2892]: E0123 19:56:09.694228 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.695316 kubelet[2892]: E0123 19:56:09.694902 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.695683 kubelet[2892]: W0123 19:56:09.695657 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.696199 kubelet[2892]: E0123 19:56:09.695913 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.697429 kubelet[2892]: E0123 19:56:09.696908 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.697429 kubelet[2892]: W0123 19:56:09.696928 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.697429 kubelet[2892]: E0123 19:56:09.696945 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.698836 kubelet[2892]: E0123 19:56:09.698580 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.698836 kubelet[2892]: W0123 19:56:09.698600 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.698836 kubelet[2892]: E0123 19:56:09.698616 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.701064 kubelet[2892]: E0123 19:56:09.700894 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.703796 kubelet[2892]: E0123 19:56:09.703576 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.703796 kubelet[2892]: W0123 19:56:09.703598 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.703796 kubelet[2892]: E0123 19:56:09.703617 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.705569 kubelet[2892]: E0123 19:56:09.705382 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.706548 kubelet[2892]: E0123 19:56:09.706161 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.706548 kubelet[2892]: W0123 19:56:09.706183 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.706548 kubelet[2892]: E0123 19:56:09.706208 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:09.709834 kubelet[2892]: E0123 19:56:09.709436 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.709834 kubelet[2892]: W0123 19:56:09.709692 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.709834 kubelet[2892]: E0123 19:56:09.709725 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.712615 kubelet[2892]: E0123 19:56:09.712049 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.712615 kubelet[2892]: W0123 19:56:09.712081 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.712615 kubelet[2892]: E0123 19:56:09.712100 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.715262 systemd[1]: Started cri-containerd-0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811.scope - libcontainer container 0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811. 
Jan 23 19:56:09.730963 kubelet[2892]: E0123 19:56:09.730923 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:09.731303 kubelet[2892]: W0123 19:56:09.731261 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:09.731468 kubelet[2892]: E0123 19:56:09.731444 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:09.759647 containerd[1583]: time="2026-01-23T19:56:09.759488058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d64b97848-m6stp,Uid:470be53d-3ad8-433a-960f-6a1243497067,Namespace:calico-system,Attempt:0,} returns sandbox id \"66b0e3dcd22949373723f1305e3caa238b1c62de5ad28f19396b0692f70cefa8\"" Jan 23 19:56:09.765098 containerd[1583]: time="2026-01-23T19:56:09.765018345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 19:56:09.782193 containerd[1583]: time="2026-01-23T19:56:09.781418564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s99m8,Uid:d29a2de4-d7db-4b40-bed8-21d022542197,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811\"" Jan 23 19:56:11.066126 kubelet[2892]: E0123 19:56:11.065978 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:11.330280 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005730252.mount: Deactivated successfully. 
Jan 23 19:56:13.065913 kubelet[2892]: E0123 19:56:13.064668 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:13.279343 containerd[1583]: time="2026-01-23T19:56:13.279254799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:13.280592 containerd[1583]: time="2026-01-23T19:56:13.280345279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 23 19:56:13.281416 containerd[1583]: time="2026-01-23T19:56:13.281346042Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:13.284328 containerd[1583]: time="2026-01-23T19:56:13.284278837Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:13.285467 containerd[1583]: time="2026-01-23T19:56:13.285433286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.52034335s" Jan 23 19:56:13.285683 containerd[1583]: time="2026-01-23T19:56:13.285598535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 23 19:56:13.289118 containerd[1583]: time="2026-01-23T19:56:13.289090121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 19:56:13.314214 containerd[1583]: time="2026-01-23T19:56:13.314151706Z" level=info msg="CreateContainer within sandbox \"66b0e3dcd22949373723f1305e3caa238b1c62de5ad28f19396b0692f70cefa8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 19:56:13.328954 containerd[1583]: time="2026-01-23T19:56:13.325725348Z" level=info msg="Container d95f85e84684503fc2323cc71e98a85f6baeae5d3ca605387d07fc4b3a20875b: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:56:13.342414 containerd[1583]: time="2026-01-23T19:56:13.342359633Z" level=info msg="CreateContainer within sandbox \"66b0e3dcd22949373723f1305e3caa238b1c62de5ad28f19396b0692f70cefa8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d95f85e84684503fc2323cc71e98a85f6baeae5d3ca605387d07fc4b3a20875b\"" Jan 23 19:56:13.345190 containerd[1583]: time="2026-01-23T19:56:13.345157362Z" level=info msg="StartContainer for \"d95f85e84684503fc2323cc71e98a85f6baeae5d3ca605387d07fc4b3a20875b\"" Jan 23 19:56:13.351254 containerd[1583]: time="2026-01-23T19:56:13.351216416Z" level=info msg="connecting to shim d95f85e84684503fc2323cc71e98a85f6baeae5d3ca605387d07fc4b3a20875b" address="unix:///run/containerd/s/ef6e78d68f4099c10fd3963713352fef0b4c5071b115031d8e401c1a9dc91b9c" protocol=ttrpc version=3 Jan 23 19:56:13.384040 systemd[1]: Started cri-containerd-d95f85e84684503fc2323cc71e98a85f6baeae5d3ca605387d07fc4b3a20875b.scope - libcontainer container d95f85e84684503fc2323cc71e98a85f6baeae5d3ca605387d07fc4b3a20875b. 
Jan 23 19:56:13.476999 containerd[1583]: time="2026-01-23T19:56:13.476923656Z" level=info msg="StartContainer for \"d95f85e84684503fc2323cc71e98a85f6baeae5d3ca605387d07fc4b3a20875b\" returns successfully" Jan 23 19:56:14.268919 kubelet[2892]: E0123 19:56:14.268758 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.268919 kubelet[2892]: W0123 19:56:14.268795 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.271883 kubelet[2892]: E0123 19:56:14.271800 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.272490 kubelet[2892]: E0123 19:56:14.272441 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.272490 kubelet[2892]: W0123 19:56:14.272468 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.272490 kubelet[2892]: E0123 19:56:14.272493 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.273098 kubelet[2892]: E0123 19:56:14.273051 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.273098 kubelet[2892]: W0123 19:56:14.273080 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.273285 kubelet[2892]: E0123 19:56:14.273110 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.273593 kubelet[2892]: E0123 19:56:14.273573 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.273754 kubelet[2892]: W0123 19:56:14.273597 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.273754 kubelet[2892]: E0123 19:56:14.273612 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.274085 kubelet[2892]: E0123 19:56:14.274031 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.274085 kubelet[2892]: W0123 19:56:14.274055 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.274209 kubelet[2892]: E0123 19:56:14.274182 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.274459 kubelet[2892]: E0123 19:56:14.274431 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.274459 kubelet[2892]: W0123 19:56:14.274451 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.274556 kubelet[2892]: E0123 19:56:14.274467 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.274815 kubelet[2892]: E0123 19:56:14.274728 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.274815 kubelet[2892]: W0123 19:56:14.274763 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.275019 kubelet[2892]: E0123 19:56:14.274915 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.275265 kubelet[2892]: E0123 19:56:14.275246 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.275265 kubelet[2892]: W0123 19:56:14.275266 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.275265 kubelet[2892]: E0123 19:56:14.275311 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.275648 kubelet[2892]: E0123 19:56:14.275622 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.275648 kubelet[2892]: W0123 19:56:14.275641 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.275746 kubelet[2892]: E0123 19:56:14.275664 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.276067 kubelet[2892]: E0123 19:56:14.276010 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.276067 kubelet[2892]: W0123 19:56:14.276030 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.276067 kubelet[2892]: E0123 19:56:14.276045 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.276456 kubelet[2892]: E0123 19:56:14.276347 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.276540 kubelet[2892]: W0123 19:56:14.276489 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.276540 kubelet[2892]: E0123 19:56:14.276507 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.277005 kubelet[2892]: E0123 19:56:14.276953 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.277005 kubelet[2892]: W0123 19:56:14.276972 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.277005 kubelet[2892]: E0123 19:56:14.276998 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.277259 kubelet[2892]: E0123 19:56:14.277209 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.277259 kubelet[2892]: W0123 19:56:14.277229 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.277259 kubelet[2892]: E0123 19:56:14.277252 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.277691 kubelet[2892]: E0123 19:56:14.277591 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.277691 kubelet[2892]: W0123 19:56:14.277627 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.277691 kubelet[2892]: E0123 19:56:14.277647 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.278028 kubelet[2892]: E0123 19:56:14.277998 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.278028 kubelet[2892]: W0123 19:56:14.278019 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.278126 kubelet[2892]: E0123 19:56:14.278034 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 19:56:14.297649 kubelet[2892]: I0123 19:56:14.296760 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d64b97848-m6stp" podStartSLOduration=2.772632025 podStartE2EDuration="6.296713644s" podCreationTimestamp="2026-01-23 19:56:08 +0000 UTC" firstStartedPulling="2026-01-23 19:56:09.763312373 +0000 UTC m=+28.870882036" lastFinishedPulling="2026-01-23 19:56:13.287393984 +0000 UTC m=+32.394963655" observedRunningTime="2026-01-23 19:56:14.294536717 +0000 UTC m=+33.402106397" watchObservedRunningTime="2026-01-23 19:56:14.296713644 +0000 UTC m=+33.404283331" Jan 23 19:56:14.310460 kubelet[2892]: E0123 19:56:14.310409 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 19:56:14.310769 kubelet[2892]: W0123 19:56:14.310671 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 19:56:14.310769 kubelet[2892]: E0123 19:56:14.310715 2892 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 19:56:14.900077 containerd[1583]: time="2026-01-23T19:56:14.899939018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:14.902055 containerd[1583]: time="2026-01-23T19:56:14.901991355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 19:56:14.902825 containerd[1583]: time="2026-01-23T19:56:14.902748528Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:14.905933 containerd[1583]: time="2026-01-23T19:56:14.905869702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:14.907158 containerd[1583]: time="2026-01-23T19:56:14.906965993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.617710881s" Jan 23 19:56:14.907158 containerd[1583]: time="2026-01-23T19:56:14.907024692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 19:56:14.910829 containerd[1583]: time="2026-01-23T19:56:14.910598078Z" level=info msg="CreateContainer within sandbox \"0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 19:56:14.970154 containerd[1583]: time="2026-01-23T19:56:14.970004464Z" level=info msg="Container 7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:56:14.999444 containerd[1583]: time="2026-01-23T19:56:14.999348736Z" level=info msg="CreateContainer within sandbox \"0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f\"" Jan 23 19:56:15.002415 containerd[1583]: time="2026-01-23T19:56:15.002337571Z" level=info msg="StartContainer for \"7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f\"" Jan 23 19:56:15.005509 containerd[1583]: time="2026-01-23T19:56:15.005356039Z" level=info msg="connecting to shim 7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f" address="unix:///run/containerd/s/0e00e3e2b82a580d46f30b8d5a767bdb15ca12da908a14296907894af1791b0c" protocol=ttrpc version=3 Jan 23 19:56:15.051158 systemd[1]: Started cri-containerd-7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f.scope - libcontainer container 
7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f. Jan 23 19:56:15.064391 kubelet[2892]: E0123 19:56:15.064316 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:15.211939 systemd[1]: cri-containerd-7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f.scope: Deactivated successfully. Jan 23 19:56:15.232670 containerd[1583]: time="2026-01-23T19:56:15.232422470Z" level=info msg="received container exit event container_id:\"7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f\" id:\"7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f\" pid:3620 exited_at:{seconds:1769198175 nanos:216624521}" Jan 23 19:56:15.235843 containerd[1583]: time="2026-01-23T19:56:15.235751149Z" level=info msg="StartContainer for \"7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f\" returns successfully" Jan 23 19:56:15.269047 kubelet[2892]: I0123 19:56:15.268997 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 19:56:15.287089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7f2a967ac28ff0cd357d7f24af89d060f13e92c4bf8572ffdff5eb64c254845f-rootfs.mount: Deactivated successfully. 
Jan 23 19:56:16.279473 containerd[1583]: time="2026-01-23T19:56:16.279386964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 19:56:17.063939 kubelet[2892]: E0123 19:56:17.063258 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:19.063996 kubelet[2892]: E0123 19:56:19.063916 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:21.064042 kubelet[2892]: E0123 19:56:21.063957 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:21.111282 containerd[1583]: time="2026-01-23T19:56:21.111212807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:21.114844 containerd[1583]: time="2026-01-23T19:56:21.114761934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 19:56:21.116208 containerd[1583]: time="2026-01-23T19:56:21.116141905Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:21.119278 containerd[1583]: 
time="2026-01-23T19:56:21.119217163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:21.120584 containerd[1583]: time="2026-01-23T19:56:21.120273818Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.840806906s" Jan 23 19:56:21.120584 containerd[1583]: time="2026-01-23T19:56:21.120316341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 19:56:21.124755 containerd[1583]: time="2026-01-23T19:56:21.124709212Z" level=info msg="CreateContainer within sandbox \"0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 19:56:21.140840 containerd[1583]: time="2026-01-23T19:56:21.139155046Z" level=info msg="Container 9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:56:21.146723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1987358181.mount: Deactivated successfully. 
Jan 23 19:56:21.171374 containerd[1583]: time="2026-01-23T19:56:21.170763261Z" level=info msg="CreateContainer within sandbox \"0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814\"" Jan 23 19:56:21.172825 containerd[1583]: time="2026-01-23T19:56:21.172744876Z" level=info msg="StartContainer for \"9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814\"" Jan 23 19:56:21.175582 containerd[1583]: time="2026-01-23T19:56:21.175479845Z" level=info msg="connecting to shim 9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814" address="unix:///run/containerd/s/0e00e3e2b82a580d46f30b8d5a767bdb15ca12da908a14296907894af1791b0c" protocol=ttrpc version=3 Jan 23 19:56:21.209138 systemd[1]: Started cri-containerd-9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814.scope - libcontainer container 9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814. Jan 23 19:56:21.339143 containerd[1583]: time="2026-01-23T19:56:21.338915392Z" level=info msg="StartContainer for \"9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814\" returns successfully" Jan 23 19:56:22.487316 systemd[1]: cri-containerd-9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814.scope: Deactivated successfully. Jan 23 19:56:22.487834 systemd[1]: cri-containerd-9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814.scope: Consumed 800ms CPU time, 170.9M memory peak, 6.6M read from disk, 171.3M written to disk. 
Jan 23 19:56:22.554126 containerd[1583]: time="2026-01-23T19:56:22.554029261Z" level=info msg="received container exit event container_id:\"9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814\" id:\"9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814\" pid:3682 exited_at:{seconds:1769198182 nanos:553439415}" Jan 23 19:56:22.557151 kubelet[2892]: I0123 19:56:22.557081 2892 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 19:56:22.686593 kubelet[2892]: I0123 19:56:22.686224 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hknq5\" (UniqueName: \"kubernetes.io/projected/326ff855-4972-4e46-8b78-1902cd53ddd3-kube-api-access-hknq5\") pod \"coredns-668d6bf9bc-kd6b5\" (UID: \"326ff855-4972-4e46-8b78-1902cd53ddd3\") " pod="kube-system/coredns-668d6bf9bc-kd6b5" Jan 23 19:56:22.686593 kubelet[2892]: I0123 19:56:22.686280 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/326ff855-4972-4e46-8b78-1902cd53ddd3-config-volume\") pod \"coredns-668d6bf9bc-kd6b5\" (UID: \"326ff855-4972-4e46-8b78-1902cd53ddd3\") " pod="kube-system/coredns-668d6bf9bc-kd6b5" Jan 23 19:56:22.689856 systemd[1]: Created slice kubepods-burstable-pod326ff855_4972_4e46_8b78_1902cd53ddd3.slice - libcontainer container kubepods-burstable-pod326ff855_4972_4e46_8b78_1902cd53ddd3.slice. Jan 23 19:56:22.710854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9da63e4e5de00ebe46b3473fa6dec69639970105e21756e812d7e790678bc814-rootfs.mount: Deactivated successfully. Jan 23 19:56:22.734780 systemd[1]: Created slice kubepods-burstable-podeb24c3b7_92a2_4394_bf0d_a2b1ca00f1f0.slice - libcontainer container kubepods-burstable-podeb24c3b7_92a2_4394_bf0d_a2b1ca00f1f0.slice. 
Jan 23 19:56:22.766068 systemd[1]: Created slice kubepods-besteffort-podb9e2459e_4d22_438e_9c19_f8662b6a9620.slice - libcontainer container kubepods-besteffort-podb9e2459e_4d22_438e_9c19_f8662b6a9620.slice. Jan 23 19:56:22.776123 systemd[1]: Created slice kubepods-besteffort-podef88c33e_7bcc_4e40_8e39_a5221bbcac5a.slice - libcontainer container kubepods-besteffort-podef88c33e_7bcc_4e40_8e39_a5221bbcac5a.slice. Jan 23 19:56:22.788303 kubelet[2892]: I0123 19:56:22.786735 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-backend-key-pair\") pod \"whisker-6dd9b8585b-4mjff\" (UID: \"a546b487-6fae-4b8e-972b-2d2526b18e04\") " pod="calico-system/whisker-6dd9b8585b-4mjff" Jan 23 19:56:22.788303 kubelet[2892]: I0123 19:56:22.786799 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gl6b\" (UniqueName: \"kubernetes.io/projected/eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0-kube-api-access-6gl6b\") pod \"coredns-668d6bf9bc-6brdf\" (UID: \"eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0\") " pod="kube-system/coredns-668d6bf9bc-6brdf" Jan 23 19:56:22.788303 kubelet[2892]: I0123 19:56:22.786889 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef88c33e-7bcc-4e40-8e39-a5221bbcac5a-tigera-ca-bundle\") pod \"calico-kube-controllers-66675fd984-tkh5k\" (UID: \"ef88c33e-7bcc-4e40-8e39-a5221bbcac5a\") " pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" Jan 23 19:56:22.788303 kubelet[2892]: I0123 19:56:22.786944 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e1c1444-00fb-4816-822a-67edc8d93d18-calico-apiserver-certs\") pod 
\"calico-apiserver-7fd877769c-tz6z7\" (UID: \"4e1c1444-00fb-4816-822a-67edc8d93d18\") " pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" Jan 23 19:56:22.788303 kubelet[2892]: I0123 19:56:22.786972 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkxlm\" (UniqueName: \"kubernetes.io/projected/4e1c1444-00fb-4816-822a-67edc8d93d18-kube-api-access-qkxlm\") pod \"calico-apiserver-7fd877769c-tz6z7\" (UID: \"4e1c1444-00fb-4816-822a-67edc8d93d18\") " pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" Jan 23 19:56:22.788718 kubelet[2892]: I0123 19:56:22.787034 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-ca-bundle\") pod \"whisker-6dd9b8585b-4mjff\" (UID: \"a546b487-6fae-4b8e-972b-2d2526b18e04\") " pod="calico-system/whisker-6dd9b8585b-4mjff" Jan 23 19:56:22.788718 kubelet[2892]: I0123 19:56:22.787083 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq2jm\" (UniqueName: \"kubernetes.io/projected/ccd58232-7772-4e2c-865f-5e90b11eb5bb-kube-api-access-qq2jm\") pod \"calico-apiserver-7fd877769c-spxf6\" (UID: \"ccd58232-7772-4e2c-865f-5e90b11eb5bb\") " pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" Jan 23 19:56:22.788718 kubelet[2892]: I0123 19:56:22.787157 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bttl\" (UniqueName: \"kubernetes.io/projected/b9e2459e-4d22-438e-9c19-f8662b6a9620-kube-api-access-9bttl\") pod \"goldmane-666569f655-p9x6m\" (UID: \"b9e2459e-4d22-438e-9c19-f8662b6a9620\") " pod="calico-system/goldmane-666569f655-p9x6m" Jan 23 19:56:22.788718 kubelet[2892]: I0123 19:56:22.787215 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0-config-volume\") pod \"coredns-668d6bf9bc-6brdf\" (UID: \"eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0\") " pod="kube-system/coredns-668d6bf9bc-6brdf" Jan 23 19:56:22.788718 kubelet[2892]: I0123 19:56:22.787246 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbhxv\" (UniqueName: \"kubernetes.io/projected/ef88c33e-7bcc-4e40-8e39-a5221bbcac5a-kube-api-access-jbhxv\") pod \"calico-kube-controllers-66675fd984-tkh5k\" (UID: \"ef88c33e-7bcc-4e40-8e39-a5221bbcac5a\") " pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" Jan 23 19:56:22.791489 kubelet[2892]: I0123 19:56:22.787327 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ccd58232-7772-4e2c-865f-5e90b11eb5bb-calico-apiserver-certs\") pod \"calico-apiserver-7fd877769c-spxf6\" (UID: \"ccd58232-7772-4e2c-865f-5e90b11eb5bb\") " pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" Jan 23 19:56:22.791489 kubelet[2892]: I0123 19:56:22.787426 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkcf\" (UniqueName: \"kubernetes.io/projected/a546b487-6fae-4b8e-972b-2d2526b18e04-kube-api-access-mmkcf\") pod \"whisker-6dd9b8585b-4mjff\" (UID: \"a546b487-6fae-4b8e-972b-2d2526b18e04\") " pod="calico-system/whisker-6dd9b8585b-4mjff" Jan 23 19:56:22.791489 kubelet[2892]: I0123 19:56:22.787496 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9e2459e-4d22-438e-9c19-f8662b6a9620-config\") pod \"goldmane-666569f655-p9x6m\" (UID: \"b9e2459e-4d22-438e-9c19-f8662b6a9620\") " pod="calico-system/goldmane-666569f655-p9x6m" Jan 23 19:56:22.791489 kubelet[2892]: I0123 19:56:22.787629 2892 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9e2459e-4d22-438e-9c19-f8662b6a9620-goldmane-ca-bundle\") pod \"goldmane-666569f655-p9x6m\" (UID: \"b9e2459e-4d22-438e-9c19-f8662b6a9620\") " pod="calico-system/goldmane-666569f655-p9x6m" Jan 23 19:56:22.791489 kubelet[2892]: I0123 19:56:22.787969 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b9e2459e-4d22-438e-9c19-f8662b6a9620-goldmane-key-pair\") pod \"goldmane-666569f655-p9x6m\" (UID: \"b9e2459e-4d22-438e-9c19-f8662b6a9620\") " pod="calico-system/goldmane-666569f655-p9x6m" Jan 23 19:56:22.795606 systemd[1]: Created slice kubepods-besteffort-podccd58232_7772_4e2c_865f_5e90b11eb5bb.slice - libcontainer container kubepods-besteffort-podccd58232_7772_4e2c_865f_5e90b11eb5bb.slice. Jan 23 19:56:22.811863 systemd[1]: Created slice kubepods-besteffort-pod4e1c1444_00fb_4816_822a_67edc8d93d18.slice - libcontainer container kubepods-besteffort-pod4e1c1444_00fb_4816_822a_67edc8d93d18.slice. Jan 23 19:56:22.829478 systemd[1]: Created slice kubepods-besteffort-poda546b487_6fae_4b8e_972b_2d2526b18e04.slice - libcontainer container kubepods-besteffort-poda546b487_6fae_4b8e_972b_2d2526b18e04.slice. 
Jan 23 19:56:23.014388 containerd[1583]: time="2026-01-23T19:56:23.014322869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kd6b5,Uid:326ff855-4972-4e46-8b78-1902cd53ddd3,Namespace:kube-system,Attempt:0,}" Jan 23 19:56:23.068738 containerd[1583]: time="2026-01-23T19:56:23.068571859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6brdf,Uid:eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0,Namespace:kube-system,Attempt:0,}" Jan 23 19:56:23.082228 systemd[1]: Created slice kubepods-besteffort-pod981744d6_418c_41e4_8d22_4fb530fbf1db.slice - libcontainer container kubepods-besteffort-pod981744d6_418c_41e4_8d22_4fb530fbf1db.slice. Jan 23 19:56:23.093445 containerd[1583]: time="2026-01-23T19:56:23.093305800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gplb,Uid:981744d6-418c-41e4-8d22-4fb530fbf1db,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:23.110181 containerd[1583]: time="2026-01-23T19:56:23.109483251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p9x6m,Uid:b9e2459e-4d22-438e-9c19-f8662b6a9620,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:23.110819 containerd[1583]: time="2026-01-23T19:56:23.110774977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66675fd984-tkh5k,Uid:ef88c33e-7bcc-4e40-8e39-a5221bbcac5a,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:23.124957 containerd[1583]: time="2026-01-23T19:56:23.124853394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-spxf6,Uid:ccd58232-7772-4e2c-865f-5e90b11eb5bb,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:56:23.129018 containerd[1583]: time="2026-01-23T19:56:23.128964291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-tz6z7,Uid:4e1c1444-00fb-4816-822a-67edc8d93d18,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:56:23.141456 containerd[1583]: 
time="2026-01-23T19:56:23.139794437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd9b8585b-4mjff,Uid:a546b487-6fae-4b8e-972b-2d2526b18e04,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:23.404491 containerd[1583]: time="2026-01-23T19:56:23.402738538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 19:56:23.513841 containerd[1583]: time="2026-01-23T19:56:23.513599693Z" level=error msg="Failed to destroy network for sandbox \"b80e3d58125dd1b2a8872b88015f9246739a32755f619f73bc7c5b0c39b86001\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.532873 containerd[1583]: time="2026-01-23T19:56:23.516417009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kd6b5,Uid:326ff855-4972-4e46-8b78-1902cd53ddd3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80e3d58125dd1b2a8872b88015f9246739a32755f619f73bc7c5b0c39b86001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.537827 kubelet[2892]: E0123 19:56:23.536601 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80e3d58125dd1b2a8872b88015f9246739a32755f619f73bc7c5b0c39b86001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.537827 kubelet[2892]: E0123 19:56:23.536719 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b80e3d58125dd1b2a8872b88015f9246739a32755f619f73bc7c5b0c39b86001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kd6b5" Jan 23 19:56:23.537827 kubelet[2892]: E0123 19:56:23.536774 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80e3d58125dd1b2a8872b88015f9246739a32755f619f73bc7c5b0c39b86001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kd6b5" Jan 23 19:56:23.538480 kubelet[2892]: E0123 19:56:23.538334 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kd6b5_kube-system(326ff855-4972-4e46-8b78-1902cd53ddd3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kd6b5_kube-system(326ff855-4972-4e46-8b78-1902cd53ddd3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b80e3d58125dd1b2a8872b88015f9246739a32755f619f73bc7c5b0c39b86001\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kd6b5" podUID="326ff855-4972-4e46-8b78-1902cd53ddd3" Jan 23 19:56:23.548853 containerd[1583]: time="2026-01-23T19:56:23.548779980Z" level=error msg="Failed to destroy network for sandbox \"256dbddba4242ae2292f270ccd45fb2171d8319937c230b8e0631b5c30bf8c0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.552432 
containerd[1583]: time="2026-01-23T19:56:23.552379370Z" level=error msg="Failed to destroy network for sandbox \"2494f3ca3c7292f3b4159f3331feba7460721a06ccf71243335636952180a9d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.553227 containerd[1583]: time="2026-01-23T19:56:23.553184401Z" level=error msg="Failed to destroy network for sandbox \"f01b72711a23d86deed2a4db0e51f494e001940b6004085200de49fa86bde261\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.554867 containerd[1583]: time="2026-01-23T19:56:23.554773869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66675fd984-tkh5k,Uid:ef88c33e-7bcc-4e40-8e39-a5221bbcac5a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"256dbddba4242ae2292f270ccd45fb2171d8319937c230b8e0631b5c30bf8c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.556109 containerd[1583]: time="2026-01-23T19:56:23.556050172Z" level=error msg="Failed to destroy network for sandbox \"178e95bfbdbbb38fec2098a56aa2280810d899739f67c31ab5929ebb49a1283b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.557294 kubelet[2892]: E0123 19:56:23.556970 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"256dbddba4242ae2292f270ccd45fb2171d8319937c230b8e0631b5c30bf8c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.557294 kubelet[2892]: E0123 19:56:23.557092 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256dbddba4242ae2292f270ccd45fb2171d8319937c230b8e0631b5c30bf8c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" Jan 23 19:56:23.557294 kubelet[2892]: E0123 19:56:23.557127 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256dbddba4242ae2292f270ccd45fb2171d8319937c230b8e0631b5c30bf8c0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" Jan 23 19:56:23.558208 kubelet[2892]: E0123 19:56:23.557186 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66675fd984-tkh5k_calico-system(ef88c33e-7bcc-4e40-8e39-a5221bbcac5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66675fd984-tkh5k_calico-system(ef88c33e-7bcc-4e40-8e39-a5221bbcac5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"256dbddba4242ae2292f270ccd45fb2171d8319937c230b8e0631b5c30bf8c0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:56:23.560384 containerd[1583]: time="2026-01-23T19:56:23.558037942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p9x6m,Uid:b9e2459e-4d22-438e-9c19-f8662b6a9620,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2494f3ca3c7292f3b4159f3331feba7460721a06ccf71243335636952180a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.561276 kubelet[2892]: E0123 19:56:23.561175 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2494f3ca3c7292f3b4159f3331feba7460721a06ccf71243335636952180a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.561355 kubelet[2892]: E0123 19:56:23.561320 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2494f3ca3c7292f3b4159f3331feba7460721a06ccf71243335636952180a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p9x6m" Jan 23 19:56:23.561410 kubelet[2892]: E0123 19:56:23.561383 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2494f3ca3c7292f3b4159f3331feba7460721a06ccf71243335636952180a9d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-p9x6m" Jan 23 19:56:23.561538 kubelet[2892]: E0123 19:56:23.561487 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-p9x6m_calico-system(b9e2459e-4d22-438e-9c19-f8662b6a9620)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-p9x6m_calico-system(b9e2459e-4d22-438e-9c19-f8662b6a9620)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2494f3ca3c7292f3b4159f3331feba7460721a06ccf71243335636952180a9d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:56:23.562095 containerd[1583]: time="2026-01-23T19:56:23.562038075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gplb,Uid:981744d6-418c-41e4-8d22-4fb530fbf1db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01b72711a23d86deed2a4db0e51f494e001940b6004085200de49fa86bde261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.562243 kubelet[2892]: E0123 19:56:23.562205 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01b72711a23d86deed2a4db0e51f494e001940b6004085200de49fa86bde261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.562598 kubelet[2892]: 
E0123 19:56:23.562254 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01b72711a23d86deed2a4db0e51f494e001940b6004085200de49fa86bde261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:23.562598 kubelet[2892]: E0123 19:56:23.562299 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f01b72711a23d86deed2a4db0e51f494e001940b6004085200de49fa86bde261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:23.562598 kubelet[2892]: E0123 19:56:23.562340 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f01b72711a23d86deed2a4db0e51f494e001940b6004085200de49fa86bde261\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:23.566284 containerd[1583]: time="2026-01-23T19:56:23.566227836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-tz6z7,Uid:4e1c1444-00fb-4816-822a-67edc8d93d18,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"178e95bfbdbbb38fec2098a56aa2280810d899739f67c31ab5929ebb49a1283b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.567049 kubelet[2892]: E0123 19:56:23.566753 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178e95bfbdbbb38fec2098a56aa2280810d899739f67c31ab5929ebb49a1283b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.567049 kubelet[2892]: E0123 19:56:23.566934 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178e95bfbdbbb38fec2098a56aa2280810d899739f67c31ab5929ebb49a1283b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" Jan 23 19:56:23.567049 kubelet[2892]: E0123 19:56:23.566977 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"178e95bfbdbbb38fec2098a56aa2280810d899739f67c31ab5929ebb49a1283b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" Jan 23 19:56:23.567278 kubelet[2892]: E0123 19:56:23.567045 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7fd877769c-tz6z7_calico-apiserver(4e1c1444-00fb-4816-822a-67edc8d93d18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd877769c-tz6z7_calico-apiserver(4e1c1444-00fb-4816-822a-67edc8d93d18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"178e95bfbdbbb38fec2098a56aa2280810d899739f67c31ab5929ebb49a1283b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:56:23.576341 containerd[1583]: time="2026-01-23T19:56:23.576289262Z" level=error msg="Failed to destroy network for sandbox \"81e3a9412bcc18d409c631ff1f0d4b08f375c6afc9f238526d62cb5b374fcc4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.579464 containerd[1583]: time="2026-01-23T19:56:23.579392247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6brdf,Uid:eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e3a9412bcc18d409c631ff1f0d4b08f375c6afc9f238526d62cb5b374fcc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.580886 kubelet[2892]: E0123 19:56:23.580005 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e3a9412bcc18d409c631ff1f0d4b08f375c6afc9f238526d62cb5b374fcc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.580886 kubelet[2892]: E0123 19:56:23.580104 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e3a9412bcc18d409c631ff1f0d4b08f375c6afc9f238526d62cb5b374fcc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6brdf" Jan 23 19:56:23.580886 kubelet[2892]: E0123 19:56:23.580139 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81e3a9412bcc18d409c631ff1f0d4b08f375c6afc9f238526d62cb5b374fcc4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6brdf" Jan 23 19:56:23.582028 kubelet[2892]: E0123 19:56:23.581193 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6brdf_kube-system(eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6brdf_kube-system(eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81e3a9412bcc18d409c631ff1f0d4b08f375c6afc9f238526d62cb5b374fcc4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6brdf" podUID="eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0" Jan 23 19:56:23.597591 containerd[1583]: time="2026-01-23T19:56:23.597519755Z" level=error msg="Failed to destroy network for 
sandbox \"12485904d411f8345132ad85bb66ca774d64cf99b2cf6dca933e6311bc3b0005\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.599245 containerd[1583]: time="2026-01-23T19:56:23.599164347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6dd9b8585b-4mjff,Uid:a546b487-6fae-4b8e-972b-2d2526b18e04,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12485904d411f8345132ad85bb66ca774d64cf99b2cf6dca933e6311bc3b0005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.600226 kubelet[2892]: E0123 19:56:23.600166 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12485904d411f8345132ad85bb66ca774d64cf99b2cf6dca933e6311bc3b0005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.600321 kubelet[2892]: E0123 19:56:23.600271 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12485904d411f8345132ad85bb66ca774d64cf99b2cf6dca933e6311bc3b0005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd9b8585b-4mjff" Jan 23 19:56:23.600406 kubelet[2892]: E0123 19:56:23.600331 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"12485904d411f8345132ad85bb66ca774d64cf99b2cf6dca933e6311bc3b0005\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6dd9b8585b-4mjff" Jan 23 19:56:23.600475 kubelet[2892]: E0123 19:56:23.600421 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6dd9b8585b-4mjff_calico-system(a546b487-6fae-4b8e-972b-2d2526b18e04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6dd9b8585b-4mjff_calico-system(a546b487-6fae-4b8e-972b-2d2526b18e04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12485904d411f8345132ad85bb66ca774d64cf99b2cf6dca933e6311bc3b0005\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6dd9b8585b-4mjff" podUID="a546b487-6fae-4b8e-972b-2d2526b18e04" Jan 23 19:56:23.605180 containerd[1583]: time="2026-01-23T19:56:23.605060050Z" level=error msg="Failed to destroy network for sandbox \"97106e76ce4e0f7bb78f60a8cf2f304ff3e1d5dc1ed68b2d68a0f3380b6a3830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.606379 containerd[1583]: time="2026-01-23T19:56:23.606299185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-spxf6,Uid:ccd58232-7772-4e2c-865f-5e90b11eb5bb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97106e76ce4e0f7bb78f60a8cf2f304ff3e1d5dc1ed68b2d68a0f3380b6a3830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.606961 kubelet[2892]: E0123 19:56:23.606659 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97106e76ce4e0f7bb78f60a8cf2f304ff3e1d5dc1ed68b2d68a0f3380b6a3830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:23.606961 kubelet[2892]: E0123 19:56:23.606735 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97106e76ce4e0f7bb78f60a8cf2f304ff3e1d5dc1ed68b2d68a0f3380b6a3830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" Jan 23 19:56:23.606961 kubelet[2892]: E0123 19:56:23.606768 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97106e76ce4e0f7bb78f60a8cf2f304ff3e1d5dc1ed68b2d68a0f3380b6a3830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" Jan 23 19:56:23.607139 kubelet[2892]: E0123 19:56:23.606870 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fd877769c-spxf6_calico-apiserver(ccd58232-7772-4e2c-865f-5e90b11eb5bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fd877769c-spxf6_calico-apiserver(ccd58232-7772-4e2c-865f-5e90b11eb5bb)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"97106e76ce4e0f7bb78f60a8cf2f304ff3e1d5dc1ed68b2d68a0f3380b6a3830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:56:28.188918 kubelet[2892]: I0123 19:56:28.188685 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 19:56:33.715586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2625508818.mount: Deactivated successfully. Jan 23 19:56:33.792356 containerd[1583]: time="2026-01-23T19:56:33.786289821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 19:56:33.792356 containerd[1583]: time="2026-01-23T19:56:33.783972155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:33.795650 containerd[1583]: time="2026-01-23T19:56:33.795578209Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:33.797098 containerd[1583]: time="2026-01-23T19:56:33.797060121Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 19:56:33.804070 containerd[1583]: time="2026-01-23T19:56:33.803928102Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", 
size \"156883537\" in 10.39612286s" Jan 23 19:56:33.804070 containerd[1583]: time="2026-01-23T19:56:33.803979631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 19:56:33.853930 containerd[1583]: time="2026-01-23T19:56:33.853274792Z" level=info msg="CreateContainer within sandbox \"0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 19:56:33.917529 containerd[1583]: time="2026-01-23T19:56:33.917443552Z" level=info msg="Container bc4fd09ee238d726537bdfd5448b409160cc0e039d9757cb4966c15bdfcd6c65: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:56:33.923709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899126855.mount: Deactivated successfully. Jan 23 19:56:33.954223 containerd[1583]: time="2026-01-23T19:56:33.954074098Z" level=info msg="CreateContainer within sandbox \"0a4c3c07c637635ede8d77ffed961405b560400cc0583fd2a787d7564b59a811\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bc4fd09ee238d726537bdfd5448b409160cc0e039d9757cb4966c15bdfcd6c65\"" Jan 23 19:56:33.956081 containerd[1583]: time="2026-01-23T19:56:33.955794155Z" level=info msg="StartContainer for \"bc4fd09ee238d726537bdfd5448b409160cc0e039d9757cb4966c15bdfcd6c65\"" Jan 23 19:56:33.962833 containerd[1583]: time="2026-01-23T19:56:33.962767473Z" level=info msg="connecting to shim bc4fd09ee238d726537bdfd5448b409160cc0e039d9757cb4966c15bdfcd6c65" address="unix:///run/containerd/s/0e00e3e2b82a580d46f30b8d5a767bdb15ca12da908a14296907894af1791b0c" protocol=ttrpc version=3 Jan 23 19:56:34.073914 containerd[1583]: time="2026-01-23T19:56:34.064166228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66675fd984-tkh5k,Uid:ef88c33e-7bcc-4e40-8e39-a5221bbcac5a,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:34.073914 
containerd[1583]: time="2026-01-23T19:56:34.064625514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gplb,Uid:981744d6-418c-41e4-8d22-4fb530fbf1db,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:34.145147 systemd[1]: Started cri-containerd-bc4fd09ee238d726537bdfd5448b409160cc0e039d9757cb4966c15bdfcd6c65.scope - libcontainer container bc4fd09ee238d726537bdfd5448b409160cc0e039d9757cb4966c15bdfcd6c65. Jan 23 19:56:34.331471 containerd[1583]: time="2026-01-23T19:56:34.331042378Z" level=error msg="Failed to destroy network for sandbox \"6960a4f07296ed7ca04e585bd7c7e5756c6f2cb056d4cd5ad71c905d2dc6a9c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:34.335473 containerd[1583]: time="2026-01-23T19:56:34.335380791Z" level=error msg="Failed to destroy network for sandbox \"2f8bedaa5ce5cc7b4864c899bbcf552a9ece5a968fdb2a35ae7b51437684b739\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:34.337224 containerd[1583]: time="2026-01-23T19:56:34.337161544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66675fd984-tkh5k,Uid:ef88c33e-7bcc-4e40-8e39-a5221bbcac5a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6960a4f07296ed7ca04e585bd7c7e5756c6f2cb056d4cd5ad71c905d2dc6a9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:34.338426 kubelet[2892]: E0123 19:56:34.337656 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6960a4f07296ed7ca04e585bd7c7e5756c6f2cb056d4cd5ad71c905d2dc6a9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:34.338426 kubelet[2892]: E0123 19:56:34.337912 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6960a4f07296ed7ca04e585bd7c7e5756c6f2cb056d4cd5ad71c905d2dc6a9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" Jan 23 19:56:34.338426 kubelet[2892]: E0123 19:56:34.337972 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6960a4f07296ed7ca04e585bd7c7e5756c6f2cb056d4cd5ad71c905d2dc6a9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" Jan 23 19:56:34.339143 kubelet[2892]: E0123 19:56:34.338066 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66675fd984-tkh5k_calico-system(ef88c33e-7bcc-4e40-8e39-a5221bbcac5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66675fd984-tkh5k_calico-system(ef88c33e-7bcc-4e40-8e39-a5221bbcac5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6960a4f07296ed7ca04e585bd7c7e5756c6f2cb056d4cd5ad71c905d2dc6a9c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:56:34.340257 containerd[1583]: time="2026-01-23T19:56:34.340216872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gplb,Uid:981744d6-418c-41e4-8d22-4fb530fbf1db,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f8bedaa5ce5cc7b4864c899bbcf552a9ece5a968fdb2a35ae7b51437684b739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:34.340741 kubelet[2892]: E0123 19:56:34.340592 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f8bedaa5ce5cc7b4864c899bbcf552a9ece5a968fdb2a35ae7b51437684b739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 19:56:34.340944 kubelet[2892]: E0123 19:56:34.340863 2892 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f8bedaa5ce5cc7b4864c899bbcf552a9ece5a968fdb2a35ae7b51437684b739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:34.341128 kubelet[2892]: E0123 19:56:34.341022 2892 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f8bedaa5ce5cc7b4864c899bbcf552a9ece5a968fdb2a35ae7b51437684b739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4gplb" Jan 23 19:56:34.341318 kubelet[2892]: E0123 19:56:34.341213 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f8bedaa5ce5cc7b4864c899bbcf552a9ece5a968fdb2a35ae7b51437684b739\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:34.362710 containerd[1583]: time="2026-01-23T19:56:34.362599921Z" level=info msg="StartContainer for \"bc4fd09ee238d726537bdfd5448b409160cc0e039d9757cb4966c15bdfcd6c65\" returns successfully" Jan 23 19:56:34.482480 kubelet[2892]: I0123 19:56:34.482125 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-s99m8" podStartSLOduration=1.459580723 podStartE2EDuration="25.482060998s" podCreationTimestamp="2026-01-23 19:56:09 +0000 UTC" firstStartedPulling="2026-01-23 19:56:09.784678983 +0000 UTC m=+28.892248653" lastFinishedPulling="2026-01-23 19:56:33.807159263 +0000 UTC m=+52.914728928" observedRunningTime="2026-01-23 19:56:34.481411234 +0000 UTC m=+53.588980955" watchObservedRunningTime="2026-01-23 19:56:34.482060998 +0000 UTC m=+53.589630684" Jan 23 19:56:34.687438 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 19:56:34.691143 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 23 19:56:34.995525 kubelet[2892]: I0123 19:56:34.995354 2892 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkcf\" (UniqueName: \"kubernetes.io/projected/a546b487-6fae-4b8e-972b-2d2526b18e04-kube-api-access-mmkcf\") pod \"a546b487-6fae-4b8e-972b-2d2526b18e04\" (UID: \"a546b487-6fae-4b8e-972b-2d2526b18e04\") " Jan 23 19:56:34.996188 kubelet[2892]: I0123 19:56:34.996161 2892 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-backend-key-pair\") pod \"a546b487-6fae-4b8e-972b-2d2526b18e04\" (UID: \"a546b487-6fae-4b8e-972b-2d2526b18e04\") " Jan 23 19:56:34.996419 kubelet[2892]: I0123 19:56:34.996396 2892 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-ca-bundle\") pod \"a546b487-6fae-4b8e-972b-2d2526b18e04\" (UID: \"a546b487-6fae-4b8e-972b-2d2526b18e04\") " Jan 23 19:56:35.001744 kubelet[2892]: I0123 19:56:35.000993 2892 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a546b487-6fae-4b8e-972b-2d2526b18e04" (UID: "a546b487-6fae-4b8e-972b-2d2526b18e04"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 19:56:35.009869 kubelet[2892]: I0123 19:56:35.009420 2892 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a546b487-6fae-4b8e-972b-2d2526b18e04-kube-api-access-mmkcf" (OuterVolumeSpecName: "kube-api-access-mmkcf") pod "a546b487-6fae-4b8e-972b-2d2526b18e04" (UID: "a546b487-6fae-4b8e-972b-2d2526b18e04"). InnerVolumeSpecName "kube-api-access-mmkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 19:56:35.010482 systemd[1]: var-lib-kubelet-pods-a546b487\x2d6fae\x2d4b8e\x2d972b\x2d2d2526b18e04-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmkcf.mount: Deactivated successfully. Jan 23 19:56:35.015729 kubelet[2892]: I0123 19:56:35.015043 2892 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a546b487-6fae-4b8e-972b-2d2526b18e04" (UID: "a546b487-6fae-4b8e-972b-2d2526b18e04"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 19:56:35.017593 systemd[1]: var-lib-kubelet-pods-a546b487\x2d6fae\x2d4b8e\x2d972b\x2d2d2526b18e04-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 19:56:35.066381 containerd[1583]: time="2026-01-23T19:56:35.066304050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-tz6z7,Uid:4e1c1444-00fb-4816-822a-67edc8d93d18,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:56:35.070969 containerd[1583]: time="2026-01-23T19:56:35.070932320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p9x6m,Uid:b9e2459e-4d22-438e-9c19-f8662b6a9620,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:35.071282 containerd[1583]: time="2026-01-23T19:56:35.071233103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6brdf,Uid:eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0,Namespace:kube-system,Attempt:0,}" Jan 23 19:56:35.101132 kubelet[2892]: I0123 19:56:35.101065 2892 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-ca-bundle\") on node \"srv-hs5p8.gb1.brightbox.com\" DevicePath \"\"" Jan 23 19:56:35.101132 kubelet[2892]: I0123 
19:56:35.101128 2892 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mmkcf\" (UniqueName: \"kubernetes.io/projected/a546b487-6fae-4b8e-972b-2d2526b18e04-kube-api-access-mmkcf\") on node \"srv-hs5p8.gb1.brightbox.com\" DevicePath \"\"" Jan 23 19:56:35.102241 kubelet[2892]: I0123 19:56:35.101148 2892 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a546b487-6fae-4b8e-972b-2d2526b18e04-whisker-backend-key-pair\") on node \"srv-hs5p8.gb1.brightbox.com\" DevicePath \"\"" Jan 23 19:56:35.119709 systemd[1]: Removed slice kubepods-besteffort-poda546b487_6fae_4b8e_972b_2d2526b18e04.slice - libcontainer container kubepods-besteffort-poda546b487_6fae_4b8e_972b_2d2526b18e04.slice. Jan 23 19:56:35.452092 kubelet[2892]: I0123 19:56:35.452046 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 19:56:35.605002 systemd[1]: Created slice kubepods-besteffort-podab667679_fb0a_4ab2_a144_1015741c2ce8.slice - libcontainer container kubepods-besteffort-podab667679_fb0a_4ab2_a144_1015741c2ce8.slice. 
Jan 23 19:56:35.706595 kubelet[2892]: I0123 19:56:35.705960 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh5vt\" (UniqueName: \"kubernetes.io/projected/ab667679-fb0a-4ab2-a144-1015741c2ce8-kube-api-access-wh5vt\") pod \"whisker-8c4bc59d-89rwz\" (UID: \"ab667679-fb0a-4ab2-a144-1015741c2ce8\") " pod="calico-system/whisker-8c4bc59d-89rwz" Jan 23 19:56:35.706595 kubelet[2892]: I0123 19:56:35.706025 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab667679-fb0a-4ab2-a144-1015741c2ce8-whisker-backend-key-pair\") pod \"whisker-8c4bc59d-89rwz\" (UID: \"ab667679-fb0a-4ab2-a144-1015741c2ce8\") " pod="calico-system/whisker-8c4bc59d-89rwz" Jan 23 19:56:35.706595 kubelet[2892]: I0123 19:56:35.706071 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab667679-fb0a-4ab2-a144-1015741c2ce8-whisker-ca-bundle\") pod \"whisker-8c4bc59d-89rwz\" (UID: \"ab667679-fb0a-4ab2-a144-1015741c2ce8\") " pod="calico-system/whisker-8c4bc59d-89rwz" Jan 23 19:56:35.803169 systemd-networkd[1500]: calif90fd35265b: Link UP Jan 23 19:56:35.804102 systemd-networkd[1500]: calif90fd35265b: Gained carrier Jan 23 19:56:35.887972 containerd[1583]: 2026-01-23 19:56:35.235 [INFO][4047] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:56:35.887972 containerd[1583]: 2026-01-23 19:56:35.292 [INFO][4047] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0 calico-apiserver-7fd877769c- calico-apiserver 4e1c1444-00fb-4816-822a-67edc8d93d18 852 0 2026-01-23 19:56:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:7fd877769c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com calico-apiserver-7fd877769c-tz6z7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif90fd35265b [] [] }} ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-" Jan 23 19:56:35.887972 containerd[1583]: 2026-01-23 19:56:35.295 [INFO][4047] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" Jan 23 19:56:35.887972 containerd[1583]: 2026-01-23 19:56:35.554 [INFO][4090] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" HandleID="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.556 [INFO][4090] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" HandleID="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"calico-apiserver-7fd877769c-tz6z7", "timestamp":"2026-01-23 
19:56:35.554266814 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.556 [INFO][4090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.556 [INFO][4090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.559 [INFO][4090] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.593 [INFO][4090] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.629 [INFO][4090] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.642 [INFO][4090] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.646 [INFO][4090] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.889120 containerd[1583]: 2026-01-23 19:56:35.652 [INFO][4090] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.889990 containerd[1583]: 2026-01-23 19:56:35.658 [INFO][4090] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.889990 containerd[1583]: 2026-01-23 19:56:35.663 [INFO][4090] ipam/ipam.go 179: Attempting to claim 
the block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.889990 containerd[1583]: 2026-01-23 19:56:35.663 [INFO][4090] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-hs5p8.gb1.brightbox.com" subnet=192.168.10.128/26 Jan 23 19:56:35.889990 containerd[1583]: 2026-01-23 19:56:35.675 [INFO][4090] ipam/ipam_block_reader_writer.go 231: The block already exists, getting it from data store affinityType="host" host="srv-hs5p8.gb1.brightbox.com" subnet=192.168.10.128/26 Jan 23 19:56:35.889990 containerd[1583]: 2026-01-23 19:56:35.681 [INFO][4090] ipam/ipam_block_reader_writer.go 247: Block is already claimed by this host, confirm the affinity affinityType="host" host="srv-hs5p8.gb1.brightbox.com" subnet=192.168.10.128/26 Jan 23 19:56:35.889990 containerd[1583]: 2026-01-23 19:56:35.682 [INFO][4090] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-hs5p8.gb1.brightbox.com" subnet=192.168.10.128/26 Jan 23 19:56:35.890654 containerd[1583]: 2026-01-23 19:56:35.686 [ERROR][4090] ipam/customresource.go 184: Error updating resource Key=BlockAffinity(srv-hs5p8.gb1.brightbox.com-192-168-10-128-26) Name="srv-hs5p8.gb1.brightbox.com-192-168-10-128-26" Resource="BlockAffinities" Value=&v3.BlockAffinity{TypeMeta:v1.TypeMeta{Kind:"BlockAffinity", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"srv-hs5p8.gb1.brightbox.com-192-168-10-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.BlockAffinitySpec{State:"confirmed", Node:"srv-hs5p8.gb1.brightbox.com", Type:"host", CIDR:"192.168.10.128/26", Deleted:"false"}} 
error=Operation cannot be fulfilled on blockaffinities.crd.projectcalico.org "srv-hs5p8.gb1.brightbox.com-192-168-10-128-26": the object has been modified; please apply your changes to the latest version and try again Jan 23 19:56:35.890654 containerd[1583]: 2026-01-23 19:56:35.691 [INFO][4090] ipam/ipam_block_reader_writer.go 292: Affinity is already confirmed host="srv-hs5p8.gb1.brightbox.com" subnet=192.168.10.128/26 Jan 23 19:56:35.890654 containerd[1583]: 2026-01-23 19:56:35.691 [INFO][4090] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.890654 containerd[1583]: 2026-01-23 19:56:35.695 [INFO][4090] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c Jan 23 19:56:35.890654 containerd[1583]: 2026-01-23 19:56:35.701 [INFO][4090] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.707 [ERROR][4090] ipam/customresource.go 184: Error updating resource Key=IPAMBlock(192-168-10-128-26) Name="192-168-10-128-26" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-10-128-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.IPAMBlockSpec{CIDR:"192.168.10.128/26", Affinity:(*string)(0xc0003e3760), Allocations:[]*int{(*int)(0xc0002d8ab8), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc00004f6d0), AttrSecondary:map[string]string{"namespace":"calico-apiserver", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"calico-apiserver-7fd877769c-tz6z7", "timestamp":"2026-01-23 19:56:35.554266814 +0000 UTC"}}}, SequenceNumber:0x188d7462a531300b, SequenceNumberForAllocation:map[string]uint64{"0":0x188d7462a531300a}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-10-128-26": the object has been modified; please apply your changes to the latest version and try again Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.707 [INFO][4090] ipam/ipam.go 1250: Failed to update block block=192.168.10.128/26 error=update 
conflict: IPAMBlock(192-168-10-128-26) handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.743 [INFO][4090] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.751 [INFO][4090] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.758 [INFO][4090] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.770 [INFO][4090] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.129/26] block=192.168.10.128/26 handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.770 [INFO][4090] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.129/26] handle="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.770 [INFO][4090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:56:35.891335 containerd[1583]: 2026-01-23 19:56:35.770 [INFO][4090] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.129/26] IPv6=[] ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" HandleID="k8s-pod-network.de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" Jan 23 19:56:35.892397 containerd[1583]: 2026-01-23 19:56:35.778 [INFO][4047] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0", GenerateName:"calico-apiserver-7fd877769c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e1c1444-00fb-4816-822a-67edc8d93d18", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd877769c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7fd877769c-tz6z7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif90fd35265b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:35.892397 containerd[1583]: 2026-01-23 19:56:35.779 [INFO][4047] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.129/32] ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" Jan 23 19:56:35.892397 containerd[1583]: 2026-01-23 19:56:35.779 [INFO][4047] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif90fd35265b ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" Jan 23 19:56:35.892397 containerd[1583]: 2026-01-23 19:56:35.810 [INFO][4047] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" Jan 23 19:56:35.892397 containerd[1583]: 2026-01-23 19:56:35.816 [INFO][4047] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0", GenerateName:"calico-apiserver-7fd877769c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e1c1444-00fb-4816-822a-67edc8d93d18", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd877769c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c", Pod:"calico-apiserver-7fd877769c-tz6z7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif90fd35265b", MAC:"fa:44:f8:d7:f8:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:35.892397 containerd[1583]: 2026-01-23 19:56:35.880 [INFO][4047] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-tz6z7" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--tz6z7-eth0" Jan 23 19:56:35.911183 containerd[1583]: time="2026-01-23T19:56:35.910615797Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c4bc59d-89rwz,Uid:ab667679-fb0a-4ab2-a144-1015741c2ce8,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:36.014421 systemd-networkd[1500]: calida70f9d911c: Link UP Jan 23 19:56:36.018965 systemd-networkd[1500]: calida70f9d911c: Gained carrier Jan 23 19:56:36.064593 containerd[1583]: time="2026-01-23T19:56:36.064501967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-spxf6,Uid:ccd58232-7772-4e2c-865f-5e90b11eb5bb,Namespace:calico-apiserver,Attempt:0,}" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.211 [INFO][4048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.272 [INFO][4048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0 goldmane-666569f655- calico-system b9e2459e-4d22-438e-9c19-f8662b6a9620 850 0 2026-01-23 19:56:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com goldmane-666569f655-p9x6m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calida70f9d911c [] [] }} ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.272 [INFO][4048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" 
WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.553 [INFO][4086] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" HandleID="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Workload="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.558 [INFO][4086] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" HandleID="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Workload="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003961c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"goldmane-666569f655-p9x6m", "timestamp":"2026-01-23 19:56:35.553987136 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.558 [INFO][4086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.771 [INFO][4086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.771 [INFO][4086] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.789 [INFO][4086] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.815 [INFO][4086] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.865 [INFO][4086] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.934 [INFO][4086] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.951 [INFO][4086] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.953 [INFO][4086] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.960 [INFO][4086] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8 Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.977 [INFO][4086] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.989 [INFO][4086] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.130/26] block=192.168.10.128/26 handle="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.989 [INFO][4086] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.130/26] handle="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.989 [INFO][4086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:56:36.068378 containerd[1583]: 2026-01-23 19:56:35.989 [INFO][4086] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.130/26] IPv6=[] ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" HandleID="k8s-pod-network.e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Workload="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" Jan 23 19:56:36.072101 containerd[1583]: 2026-01-23 19:56:36.002 [INFO][4048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b9e2459e-4d22-438e-9c19-f8662b6a9620", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-p9x6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida70f9d911c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.072101 containerd[1583]: 2026-01-23 19:56:36.002 [INFO][4048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.130/32] ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" Jan 23 19:56:36.072101 containerd[1583]: 2026-01-23 19:56:36.003 [INFO][4048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida70f9d911c ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" Jan 23 19:56:36.072101 containerd[1583]: 2026-01-23 19:56:36.021 [INFO][4048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" Jan 23 19:56:36.072101 containerd[1583]: 
2026-01-23 19:56:36.022 [INFO][4048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"b9e2459e-4d22-438e-9c19-f8662b6a9620", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8", Pod:"goldmane-666569f655-p9x6m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida70f9d911c", MAC:"aa:47:d9:f3:8a:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.072101 containerd[1583]: 2026-01-23 19:56:36.050 [INFO][4048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" Namespace="calico-system" Pod="goldmane-666569f655-p9x6m" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-goldmane--666569f655--p9x6m-eth0" Jan 23 19:56:36.175929 systemd-networkd[1500]: cali79b20f26bf5: Link UP Jan 23 19:56:36.185393 systemd-networkd[1500]: cali79b20f26bf5: Gained carrier Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.255 [INFO][4067] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.294 [INFO][4067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0 coredns-668d6bf9bc- kube-system eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0 851 0 2026-01-23 19:55:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com coredns-668d6bf9bc-6brdf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79b20f26bf5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.296 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.554 [INFO][4089] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" HandleID="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Workload="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.558 [INFO][4089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" HandleID="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Workload="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000330bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-6brdf", "timestamp":"2026-01-23 19:56:35.554300683 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.559 [INFO][4089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.989 [INFO][4089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:35.989 [INFO][4089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.012 [INFO][4089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.039 [INFO][4089] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.066 [INFO][4089] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.073 [INFO][4089] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.081 [INFO][4089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.081 [INFO][4089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.085 [INFO][4089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602 Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.097 [INFO][4089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.115 [INFO][4089] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.131/26] block=192.168.10.128/26 handle="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.115 [INFO][4089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.131/26] handle="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.116 [INFO][4089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:56:36.235860 containerd[1583]: 2026-01-23 19:56:36.116 [INFO][4089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.131/26] IPv6=[] ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" HandleID="k8s-pod-network.a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Workload="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" Jan 23 19:56:36.238265 containerd[1583]: 2026-01-23 19:56:36.133 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-6brdf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79b20f26bf5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.238265 containerd[1583]: 2026-01-23 19:56:36.133 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.131/32] ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" Jan 23 19:56:36.238265 containerd[1583]: 2026-01-23 19:56:36.133 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79b20f26bf5 ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" Jan 23 19:56:36.238265 containerd[1583]: 
2026-01-23 19:56:36.184 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" Jan 23 19:56:36.238265 containerd[1583]: 2026-01-23 19:56:36.188 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602", Pod:"coredns-668d6bf9bc-6brdf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali79b20f26bf5", MAC:"c6:40:7f:0f:fd:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.238265 containerd[1583]: 2026-01-23 19:56:36.221 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" Namespace="kube-system" Pod="coredns-668d6bf9bc-6brdf" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6brdf-eth0" Jan 23 19:56:36.352783 containerd[1583]: time="2026-01-23T19:56:36.352712569Z" level=info msg="connecting to shim e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8" address="unix:///run/containerd/s/4b24b5825bc0d38ddf6a374947a51993014bdaf92ef8bc9389c872ab3f14c676" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:36.368184 containerd[1583]: time="2026-01-23T19:56:36.367485164Z" level=info msg="connecting to shim de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c" address="unix:///run/containerd/s/b5227a97bcf54f7ce1f97e35fccbd094252e99e2e399548c5284198267c52c45" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:36.383890 containerd[1583]: time="2026-01-23T19:56:36.383804005Z" level=info msg="connecting to shim a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602" address="unix:///run/containerd/s/dd83c1fec876fe19bbbb48107f5081f5128263904e5feb3835f7bc62e7640303" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:36.424645 systemd-networkd[1500]: cali957f0da4f84: Link UP Jan 23 
19:56:36.425847 systemd-networkd[1500]: cali957f0da4f84: Gained carrier Jan 23 19:56:36.457364 systemd[1]: Started cri-containerd-e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8.scope - libcontainer container e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8. Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.082 [INFO][4128] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.138 [INFO][4128] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0 whisker-8c4bc59d- calico-system ab667679-fb0a-4ab2-a144-1015741c2ce8 929 0 2026-01-23 19:56:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8c4bc59d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com whisker-8c4bc59d-89rwz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali957f0da4f84 [] [] }} ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.138 [INFO][4128] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.298 [INFO][4167] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" 
HandleID="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Workload="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.299 [INFO][4167] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" HandleID="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Workload="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"whisker-8c4bc59d-89rwz", "timestamp":"2026-01-23 19:56:36.298944971 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.299 [INFO][4167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.299 [INFO][4167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.299 [INFO][4167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.326 [INFO][4167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.339 [INFO][4167] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.351 [INFO][4167] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.357 [INFO][4167] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.363 [INFO][4167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.363 [INFO][4167] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.369 [INFO][4167] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10 Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.381 [INFO][4167] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.398 [INFO][4167] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.132/26] block=192.168.10.128/26 handle="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.401 [INFO][4167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.132/26] handle="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.401 [INFO][4167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:56:36.481518 containerd[1583]: 2026-01-23 19:56:36.402 [INFO][4167] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.132/26] IPv6=[] ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" HandleID="k8s-pod-network.e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Workload="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" Jan 23 19:56:36.485015 containerd[1583]: 2026-01-23 19:56:36.410 [INFO][4128] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0", GenerateName:"whisker-8c4bc59d-", Namespace:"calico-system", SelfLink:"", UID:"ab667679-fb0a-4ab2-a144-1015741c2ce8", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8c4bc59d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"whisker-8c4bc59d-89rwz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali957f0da4f84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.485015 containerd[1583]: 2026-01-23 19:56:36.411 [INFO][4128] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.132/32] ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" Jan 23 19:56:36.485015 containerd[1583]: 2026-01-23 19:56:36.411 [INFO][4128] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali957f0da4f84 ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" Jan 23 19:56:36.485015 containerd[1583]: 2026-01-23 19:56:36.432 [INFO][4128] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" Jan 23 19:56:36.485015 containerd[1583]: 2026-01-23 19:56:36.438 [INFO][4128] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0", GenerateName:"whisker-8c4bc59d-", Namespace:"calico-system", SelfLink:"", UID:"ab667679-fb0a-4ab2-a144-1015741c2ce8", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8c4bc59d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10", Pod:"whisker-8c4bc59d-89rwz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali957f0da4f84", MAC:"be:92:5d:de:36:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.485015 containerd[1583]: 2026-01-23 19:56:36.472 [INFO][4128] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" 
Namespace="calico-system" Pod="whisker-8c4bc59d-89rwz" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-whisker--8c4bc59d--89rwz-eth0" Jan 23 19:56:36.535665 systemd[1]: Started cri-containerd-de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c.scope - libcontainer container de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c. Jan 23 19:56:36.550174 systemd[1]: Started cri-containerd-a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602.scope - libcontainer container a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602. Jan 23 19:56:36.575268 systemd-networkd[1500]: cali32684516038: Link UP Jan 23 19:56:36.577879 systemd-networkd[1500]: cali32684516038: Gained carrier Jan 23 19:56:36.601502 containerd[1583]: time="2026-01-23T19:56:36.601439203Z" level=info msg="connecting to shim e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10" address="unix:///run/containerd/s/f5c51a6d475d036815f7e64702253e97d8a827bbff2a23517ae38175083277c7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.153 [INFO][4150] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.194 [INFO][4150] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0 calico-apiserver-7fd877769c- calico-apiserver ccd58232-7772-4e2c-865f-5e90b11eb5bb 854 0 2026-01-23 19:56:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fd877769c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com calico-apiserver-7fd877769c-spxf6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
cali32684516038 [] [] }} ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.194 [INFO][4150] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.341 [INFO][4176] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" HandleID="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.342 [INFO][4176] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" HandleID="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"calico-apiserver-7fd877769c-spxf6", "timestamp":"2026-01-23 19:56:36.34176525 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 
19:56:36.342 [INFO][4176] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.404 [INFO][4176] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.404 [INFO][4176] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.446 [INFO][4176] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.466 [INFO][4176] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.486 [INFO][4176] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.500 [INFO][4176] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.520 [INFO][4176] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.522 [INFO][4176] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.527 [INFO][4176] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266 Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.538 [INFO][4176] ipam/ipam.go 1246: Writing block in order to claim IPs 
block=192.168.10.128/26 handle="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.551 [INFO][4176] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.133/26] block=192.168.10.128/26 handle="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.551 [INFO][4176] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.133/26] handle="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.551 [INFO][4176] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:56:36.620915 containerd[1583]: 2026-01-23 19:56:36.551 [INFO][4176] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.133/26] IPv6=[] ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" HandleID="k8s-pod-network.e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" Jan 23 19:56:36.626651 containerd[1583]: 2026-01-23 19:56:36.562 [INFO][4150] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0", GenerateName:"calico-apiserver-7fd877769c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ccd58232-7772-4e2c-865f-5e90b11eb5bb", 
ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd877769c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7fd877769c-spxf6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32684516038", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.626651 containerd[1583]: 2026-01-23 19:56:36.562 [INFO][4150] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.133/32] ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" Jan 23 19:56:36.626651 containerd[1583]: 2026-01-23 19:56:36.562 [INFO][4150] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32684516038 ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" Jan 23 19:56:36.626651 
containerd[1583]: 2026-01-23 19:56:36.576 [INFO][4150] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" Jan 23 19:56:36.626651 containerd[1583]: 2026-01-23 19:56:36.580 [INFO][4150] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0", GenerateName:"calico-apiserver-7fd877769c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ccd58232-7772-4e2c-865f-5e90b11eb5bb", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fd877769c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266", Pod:"calico-apiserver-7fd877769c-spxf6", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32684516038", MAC:"0a:e0:fb:33:ce:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:36.626651 containerd[1583]: 2026-01-23 19:56:36.612 [INFO][4150] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" Namespace="calico-apiserver" Pod="calico-apiserver-7fd877769c-spxf6" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--apiserver--7fd877769c--spxf6-eth0" Jan 23 19:56:36.661313 systemd[1]: Started cri-containerd-e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10.scope - libcontainer container e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10. Jan 23 19:56:36.669657 containerd[1583]: time="2026-01-23T19:56:36.669608538Z" level=info msg="connecting to shim e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266" address="unix:///run/containerd/s/e131ed2e84673e5c63f1e1b9100ebd636e3a77126457141f19f840cde76ebda3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:36.715865 containerd[1583]: time="2026-01-23T19:56:36.712773037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6brdf,Uid:eb24c3b7-92a2-4394-bf0d-a2b1ca00f1f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602\"" Jan 23 19:56:36.730850 containerd[1583]: time="2026-01-23T19:56:36.730135398Z" level=info msg="CreateContainer within sandbox \"a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:56:36.781115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947977969.mount: 
Deactivated successfully. Jan 23 19:56:36.815956 containerd[1583]: time="2026-01-23T19:56:36.812002492Z" level=info msg="Container 0d9b89d9c8404d1068bbf32f62343ad11c01a581f0841558aacb320f904ab44f: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:56:36.817379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2968890544.mount: Deactivated successfully. Jan 23 19:56:36.838083 systemd[1]: Started cri-containerd-e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266.scope - libcontainer container e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266. Jan 23 19:56:36.851921 containerd[1583]: time="2026-01-23T19:56:36.851861120Z" level=info msg="CreateContainer within sandbox \"a57eb9eefaf00f71ee963e76179c08f4e9f503f64bd1d7414546a1a856f99602\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d9b89d9c8404d1068bbf32f62343ad11c01a581f0841558aacb320f904ab44f\"" Jan 23 19:56:36.857780 containerd[1583]: time="2026-01-23T19:56:36.857514376Z" level=info msg="StartContainer for \"0d9b89d9c8404d1068bbf32f62343ad11c01a581f0841558aacb320f904ab44f\"" Jan 23 19:56:36.858847 containerd[1583]: time="2026-01-23T19:56:36.858770071Z" level=info msg="connecting to shim 0d9b89d9c8404d1068bbf32f62343ad11c01a581f0841558aacb320f904ab44f" address="unix:///run/containerd/s/dd83c1fec876fe19bbbb48107f5081f5128263904e5feb3835f7bc62e7640303" protocol=ttrpc version=3 Jan 23 19:56:36.942001 containerd[1583]: time="2026-01-23T19:56:36.941942718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-p9x6m,Uid:b9e2459e-4d22-438e-9c19-f8662b6a9620,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6df874013f70a556b8e496f9270928e8c7a97ea9f758cb9678714a9f10de6e8\"" Jan 23 19:56:36.953845 containerd[1583]: time="2026-01-23T19:56:36.952676004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:56:36.996017 systemd[1]: Started 
cri-containerd-0d9b89d9c8404d1068bbf32f62343ad11c01a581f0841558aacb320f904ab44f.scope - libcontainer container 0d9b89d9c8404d1068bbf32f62343ad11c01a581f0841558aacb320f904ab44f. Jan 23 19:56:37.030716 containerd[1583]: time="2026-01-23T19:56:37.030669787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8c4bc59d-89rwz,Uid:ab667679-fb0a-4ab2-a144-1015741c2ce8,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7c175aa5ec3556515e51a90b2bc87dc26776a8e7e61e10d003fca20a15dbe10\"" Jan 23 19:56:37.033694 containerd[1583]: time="2026-01-23T19:56:37.032318361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-tz6z7,Uid:4e1c1444-00fb-4816-822a-67edc8d93d18,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"de34b979bb64dcdeb26278034d30d6569c2345446d54e3ba81e212b385e4620c\"" Jan 23 19:56:37.074852 kubelet[2892]: I0123 19:56:37.071576 2892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a546b487-6fae-4b8e-972b-2d2526b18e04" path="/var/lib/kubelet/pods/a546b487-6fae-4b8e-972b-2d2526b18e04/volumes" Jan 23 19:56:37.110844 containerd[1583]: time="2026-01-23T19:56:37.110756047Z" level=info msg="StartContainer for \"0d9b89d9c8404d1068bbf32f62343ad11c01a581f0841558aacb320f904ab44f\" returns successfully" Jan 23 19:56:37.128904 containerd[1583]: time="2026-01-23T19:56:37.128768644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fd877769c-spxf6,Uid:ccd58232-7772-4e2c-865f-5e90b11eb5bb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e4d7ec0112e775f5eb41613526ccc83d9a9e9477ace20fa291d5688cc7d12266\"" Jan 23 19:56:37.327769 containerd[1583]: time="2026-01-23T19:56:37.327287251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:37.329905 containerd[1583]: time="2026-01-23T19:56:37.329789524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:56:37.330301 containerd[1583]: time="2026-01-23T19:56:37.330042886Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:56:37.332144 kubelet[2892]: E0123 19:56:37.332067 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:56:37.332365 kubelet[2892]: E0123 19:56:37.332328 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:56:37.356668 kubelet[2892]: E0123 19:56:37.356187 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bttl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p9x6m_calico-system(b9e2459e-4d22-438e-9c19-f8662b6a9620): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:37.358746 containerd[1583]: time="2026-01-23T19:56:37.358448604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:56:37.364799 systemd-networkd[1500]: calif90fd35265b: Gained IPv6LL Jan 23 19:56:37.368618 kubelet[2892]: E0123 19:56:37.368471 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:56:37.488059 systemd-networkd[1500]: 
calida70f9d911c: Gained IPv6LL Jan 23 19:56:37.510489 kubelet[2892]: E0123 19:56:37.510362 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:56:37.537657 kubelet[2892]: I0123 19:56:37.536968 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6brdf" podStartSLOduration=50.536947136 podStartE2EDuration="50.536947136s" podCreationTimestamp="2026-01-23 19:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:56:37.534196586 +0000 UTC m=+56.641766270" watchObservedRunningTime="2026-01-23 19:56:37.536947136 +0000 UTC m=+56.644516814" Jan 23 19:56:37.540824 kubelet[2892]: I0123 19:56:37.539991 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 19:56:37.616036 systemd-networkd[1500]: cali79b20f26bf5: Gained IPv6LL Jan 23 19:56:38.447992 systemd-networkd[1500]: cali957f0da4f84: Gained IPv6LL Jan 23 19:56:38.512027 systemd-networkd[1500]: cali32684516038: Gained IPv6LL Jan 23 19:56:38.513697 kubelet[2892]: E0123 19:56:38.513642 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:56:38.741501 containerd[1583]: time="2026-01-23T19:56:38.741281616Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:38.742972 containerd[1583]: time="2026-01-23T19:56:38.742787316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:56:38.743501 containerd[1583]: time="2026-01-23T19:56:38.742834644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:56:38.744008 kubelet[2892]: E0123 19:56:38.743878 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:56:38.744386 kubelet[2892]: E0123 19:56:38.744122 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:56:38.744640 kubelet[2892]: E0123 19:56:38.744532 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8dd94f387d0441c7a2d0c496a36edcb6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:38.745662 containerd[1583]: time="2026-01-23T19:56:38.745466827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:56:38.895976 
systemd-networkd[1500]: vxlan.calico: Link UP Jan 23 19:56:38.895987 systemd-networkd[1500]: vxlan.calico: Gained carrier Jan 23 19:56:39.067976 containerd[1583]: time="2026-01-23T19:56:39.067518447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:39.071591 containerd[1583]: time="2026-01-23T19:56:39.070541242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:56:39.071721 containerd[1583]: time="2026-01-23T19:56:39.071679898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:56:39.074421 kubelet[2892]: E0123 19:56:39.074151 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:39.074421 kubelet[2892]: E0123 19:56:39.074206 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:39.075518 kubelet[2892]: E0123 19:56:39.075458 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkxlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fd877769c-tz6z7_calico-apiserver(4e1c1444-00fb-4816-822a-67edc8d93d18): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:39.077830 containerd[1583]: time="2026-01-23T19:56:39.077186263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:56:39.077930 kubelet[2892]: E0123 19:56:39.077617 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:56:39.090517 containerd[1583]: time="2026-01-23T19:56:39.090288079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kd6b5,Uid:326ff855-4972-4e46-8b78-1902cd53ddd3,Namespace:kube-system,Attempt:0,}" Jan 23 19:56:39.310431 systemd-networkd[1500]: cali9cb2fa32aaf: Link UP Jan 23 19:56:39.312118 systemd-networkd[1500]: cali9cb2fa32aaf: Gained carrier Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.179 [INFO][4687] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0 coredns-668d6bf9bc- kube-system 326ff855-4972-4e46-8b78-1902cd53ddd3 846 0 2026-01-23 19:55:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com coredns-668d6bf9bc-kd6b5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9cb2fa32aaf [{dns UDP 
53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.181 [INFO][4687] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.240 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" HandleID="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Workload="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.240 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" HandleID="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Workload="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-kd6b5", "timestamp":"2026-01-23 19:56:39.240624151 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.241 [INFO][4701] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.241 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.241 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.252 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.261 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.270 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.273 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.278 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.278 [INFO][4701] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.281 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2 Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.288 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 
handle="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.300 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.134/26] block=192.168.10.128/26 handle="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.301 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.134/26] handle="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.301 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 19:56:39.342537 containerd[1583]: 2026-01-23 19:56:39.301 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.134/26] IPv6=[] ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" HandleID="k8s-pod-network.96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Workload="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" Jan 23 19:56:39.345949 containerd[1583]: 2026-01-23 19:56:39.305 [INFO][4687] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"326ff855-4972-4e46-8b78-1902cd53ddd3", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 
55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-kd6b5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cb2fa32aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:39.345949 containerd[1583]: 2026-01-23 19:56:39.306 [INFO][4687] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.134/32] ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" Jan 23 19:56:39.345949 containerd[1583]: 2026-01-23 19:56:39.306 [INFO][4687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cb2fa32aaf ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" Jan 23 19:56:39.345949 containerd[1583]: 2026-01-23 19:56:39.312 [INFO][4687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" Jan 23 19:56:39.345949 containerd[1583]: 2026-01-23 19:56:39.313 [INFO][4687] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"326ff855-4972-4e46-8b78-1902cd53ddd3", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 55, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2", Pod:"coredns-668d6bf9bc-kd6b5", Endpoint:"eth0", 
ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9cb2fa32aaf", MAC:"f6:05:b7:05:10:e1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:39.345949 containerd[1583]: 2026-01-23 19:56:39.331 [INFO][4687] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" Namespace="kube-system" Pod="coredns-668d6bf9bc-kd6b5" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--kd6b5-eth0" Jan 23 19:56:39.387802 containerd[1583]: time="2026-01-23T19:56:39.387469602Z" level=info msg="connecting to shim 96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2" address="unix:///run/containerd/s/5f3ee366945d504980aca9188dbe928e036bb9b63c6406927568e02777263f85" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:39.391841 containerd[1583]: time="2026-01-23T19:56:39.391749113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:39.394121 containerd[1583]: time="2026-01-23T19:56:39.394049501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:56:39.394377 containerd[1583]: time="2026-01-23T19:56:39.394254012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:56:39.395850 kubelet[2892]: E0123 19:56:39.394793 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:39.395850 kubelet[2892]: E0123 19:56:39.394937 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:39.395850 kubelet[2892]: E0123 19:56:39.395336 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq2jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fd877769c-spxf6_calico-apiserver(ccd58232-7772-4e2c-865f-5e90b11eb5bb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:39.396983 kubelet[2892]: E0123 19:56:39.396780 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:56:39.398060 containerd[1583]: time="2026-01-23T19:56:39.397860341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:56:39.453158 systemd[1]: Started cri-containerd-96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2.scope - libcontainer container 96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2. 
Jan 23 19:56:39.519371 kubelet[2892]: E0123 19:56:39.519099 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:56:39.521625 kubelet[2892]: E0123 19:56:39.521582 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:56:39.603217 containerd[1583]: time="2026-01-23T19:56:39.602414235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kd6b5,Uid:326ff855-4972-4e46-8b78-1902cd53ddd3,Namespace:kube-system,Attempt:0,} returns sandbox id \"96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2\"" Jan 23 19:56:39.633712 containerd[1583]: time="2026-01-23T19:56:39.632789143Z" level=info msg="CreateContainer within sandbox \"96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 19:56:39.663427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013276911.mount: Deactivated successfully. 
Jan 23 19:56:39.666837 containerd[1583]: time="2026-01-23T19:56:39.665657908Z" level=info msg="Container b25a362da6300100d42675dfef26d2857de57e20235952fe5e6a73c774338fd3: CDI devices from CRI Config.CDIDevices: []" Jan 23 19:56:39.676837 containerd[1583]: time="2026-01-23T19:56:39.676135948Z" level=info msg="CreateContainer within sandbox \"96ea33a64be39364d54a760149346e74cba7678b53d163b8a124acf7687dffb2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b25a362da6300100d42675dfef26d2857de57e20235952fe5e6a73c774338fd3\"" Jan 23 19:56:39.678693 containerd[1583]: time="2026-01-23T19:56:39.678615793Z" level=info msg="StartContainer for \"b25a362da6300100d42675dfef26d2857de57e20235952fe5e6a73c774338fd3\"" Jan 23 19:56:39.681792 containerd[1583]: time="2026-01-23T19:56:39.681704243Z" level=info msg="connecting to shim b25a362da6300100d42675dfef26d2857de57e20235952fe5e6a73c774338fd3" address="unix:///run/containerd/s/5f3ee366945d504980aca9188dbe928e036bb9b63c6406927568e02777263f85" protocol=ttrpc version=3 Jan 23 19:56:39.722545 systemd[1]: Started cri-containerd-b25a362da6300100d42675dfef26d2857de57e20235952fe5e6a73c774338fd3.scope - libcontainer container b25a362da6300100d42675dfef26d2857de57e20235952fe5e6a73c774338fd3. 
Jan 23 19:56:39.754226 containerd[1583]: time="2026-01-23T19:56:39.754068807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:39.759944 containerd[1583]: time="2026-01-23T19:56:39.757167919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:56:39.759944 containerd[1583]: time="2026-01-23T19:56:39.757317009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:56:39.762189 kubelet[2892]: E0123 19:56:39.760498 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:56:39.762189 kubelet[2892]: E0123 19:56:39.760578 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:56:39.763397 kubelet[2892]: E0123 19:56:39.761009 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:39.765548 kubelet[2892]: E0123 19:56:39.765345 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:56:39.823657 containerd[1583]: time="2026-01-23T19:56:39.821689388Z" level=info msg="StartContainer for \"b25a362da6300100d42675dfef26d2857de57e20235952fe5e6a73c774338fd3\" returns successfully" Jan 23 19:56:40.524498 kubelet[2892]: E0123 19:56:40.524338 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:56:40.565714 kubelet[2892]: I0123 19:56:40.565610 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kd6b5" podStartSLOduration=53.565586394 podStartE2EDuration="53.565586394s" podCreationTimestamp="2026-01-23 19:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 19:56:40.563394381 +0000 UTC m=+59.670964063" watchObservedRunningTime="2026-01-23 19:56:40.565586394 +0000 UTC m=+59.673156072" Jan 23 19:56:40.752394 systemd-networkd[1500]: vxlan.calico: Gained IPv6LL Jan 23 19:56:41.136979 systemd-networkd[1500]: cali9cb2fa32aaf: Gained IPv6LL Jan 23 19:56:46.064954 containerd[1583]: time="2026-01-23T19:56:46.064860348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66675fd984-tkh5k,Uid:ef88c33e-7bcc-4e40-8e39-a5221bbcac5a,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:46.256275 systemd-networkd[1500]: cali553aa675a3a: Link UP Jan 23 19:56:46.258031 systemd-networkd[1500]: cali553aa675a3a: Gained carrier Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.130 [INFO][4862] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0 calico-kube-controllers-66675fd984- calico-system ef88c33e-7bcc-4e40-8e39-a5221bbcac5a 853 0 2026-01-23 19:56:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66675fd984 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com calico-kube-controllers-66675fd984-tkh5k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali553aa675a3a [] [] }} ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" Pod="calico-kube-controllers-66675fd984-tkh5k" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.131 [INFO][4862] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" Pod="calico-kube-controllers-66675fd984-tkh5k" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.182 [INFO][4874] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" HandleID="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.182 [INFO][4874] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" HandleID="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332a60), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"calico-kube-controllers-66675fd984-tkh5k", "timestamp":"2026-01-23 
19:56:46.182660649 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.183 [INFO][4874] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.183 [INFO][4874] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.183 [INFO][4874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.196 [INFO][4874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.207 [INFO][4874] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.214 [INFO][4874] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.217 [INFO][4874] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.222 [INFO][4874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.222 [INFO][4874] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 
19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.225 [INFO][4874] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10 Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.233 [INFO][4874] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.244 [INFO][4874] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.135/26] block=192.168.10.128/26 handle="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.244 [INFO][4874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.135/26] handle="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.244 [INFO][4874] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:56:46.291059 containerd[1583]: 2026-01-23 19:56:46.244 [INFO][4874] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.135/26] IPv6=[] ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" HandleID="k8s-pod-network.6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Workload="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" Jan 23 19:56:46.292760 containerd[1583]: 2026-01-23 19:56:46.249 [INFO][4862] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" Pod="calico-kube-controllers-66675fd984-tkh5k" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0", GenerateName:"calico-kube-controllers-66675fd984-", Namespace:"calico-system", SelfLink:"", UID:"ef88c33e-7bcc-4e40-8e39-a5221bbcac5a", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66675fd984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-66675fd984-tkh5k", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali553aa675a3a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:46.292760 containerd[1583]: 2026-01-23 19:56:46.250 [INFO][4862] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.135/32] ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" Pod="calico-kube-controllers-66675fd984-tkh5k" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" Jan 23 19:56:46.292760 containerd[1583]: 2026-01-23 19:56:46.250 [INFO][4862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali553aa675a3a ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" Pod="calico-kube-controllers-66675fd984-tkh5k" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" Jan 23 19:56:46.292760 containerd[1583]: 2026-01-23 19:56:46.256 [INFO][4862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" Pod="calico-kube-controllers-66675fd984-tkh5k" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" Jan 23 19:56:46.292760 containerd[1583]: 2026-01-23 19:56:46.260 [INFO][4862] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" Pod="calico-kube-controllers-66675fd984-tkh5k" 
WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0", GenerateName:"calico-kube-controllers-66675fd984-", Namespace:"calico-system", SelfLink:"", UID:"ef88c33e-7bcc-4e40-8e39-a5221bbcac5a", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66675fd984", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10", Pod:"calico-kube-controllers-66675fd984-tkh5k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali553aa675a3a", MAC:"9e:ef:4c:39:e1:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:46.292760 containerd[1583]: 2026-01-23 19:56:46.284 [INFO][4862] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" Namespace="calico-system" 
Pod="calico-kube-controllers-66675fd984-tkh5k" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-calico--kube--controllers--66675fd984--tkh5k-eth0" Jan 23 19:56:46.337336 containerd[1583]: time="2026-01-23T19:56:46.337024907Z" level=info msg="connecting to shim 6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10" address="unix:///run/containerd/s/ccbc98af0677f13eeb90cf50cb5cd77e9a980dee34fdaa13f9e0b5c74ae354e7" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:46.398031 systemd[1]: Started cri-containerd-6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10.scope - libcontainer container 6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10. Jan 23 19:56:46.496505 containerd[1583]: time="2026-01-23T19:56:46.496448667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66675fd984-tkh5k,Uid:ef88c33e-7bcc-4e40-8e39-a5221bbcac5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ecd531d4291f3a63e28201babd9bfe674d727215fa0dd2673e01677e08a9c10\"" Jan 23 19:56:46.501288 containerd[1583]: time="2026-01-23T19:56:46.501223362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:56:46.833699 containerd[1583]: time="2026-01-23T19:56:46.833538951Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:46.841939 containerd[1583]: time="2026-01-23T19:56:46.841869667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:56:46.842045 containerd[1583]: time="2026-01-23T19:56:46.841897111Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:56:46.842651 
kubelet[2892]: E0123 19:56:46.842420 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:56:46.842651 kubelet[2892]: E0123 19:56:46.842528 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:56:46.843205 kubelet[2892]: E0123 19:56:46.842881 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bu
ndle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbhxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66675fd984-tkh5k_calico-system(ef88c33e-7bcc-4e40-8e39-a5221bbcac5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:46.844362 kubelet[2892]: E0123 19:56:46.844289 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:56:47.547890 kubelet[2892]: E0123 19:56:47.546750 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:56:47.857103 systemd-networkd[1500]: cali553aa675a3a: Gained IPv6LL Jan 23 19:56:49.066036 containerd[1583]: time="2026-01-23T19:56:49.065964455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gplb,Uid:981744d6-418c-41e4-8d22-4fb530fbf1db,Namespace:calico-system,Attempt:0,}" Jan 23 19:56:49.256562 systemd-networkd[1500]: calie59ade7700f: Link UP Jan 23 19:56:49.260703 systemd-networkd[1500]: calie59ade7700f: Gained carrier Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.132 [INFO][4939] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0 csi-node-driver- calico-system 981744d6-418c-41e4-8d22-4fb530fbf1db 734 0 2026-01-23 19:56:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-hs5p8.gb1.brightbox.com csi-node-driver-4gplb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie59ade7700f [] [] }} ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.132 [INFO][4939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.180 [INFO][4951] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" HandleID="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Workload="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.180 [INFO][4951] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" HandleID="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Workload="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hs5p8.gb1.brightbox.com", "pod":"csi-node-driver-4gplb", "timestamp":"2026-01-23 19:56:49.180102021 +0000 UTC"}, Hostname:"srv-hs5p8.gb1.brightbox.com", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.180 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.180 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.180 [INFO][4951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hs5p8.gb1.brightbox.com' Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.192 [INFO][4951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.207 [INFO][4951] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.215 [INFO][4951] ipam/ipam.go 511: Trying affinity for 192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.219 [INFO][4951] ipam/ipam.go 158: Attempting to load block cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.224 [INFO][4951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.10.128/26 host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.224 [INFO][4951] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.10.128/26 handle="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.227 [INFO][4951] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03 Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.235 [INFO][4951] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.10.128/26 handle="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.244 [INFO][4951] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.10.136/26] block=192.168.10.128/26 handle="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.244 [INFO][4951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.10.136/26] handle="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" host="srv-hs5p8.gb1.brightbox.com" Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.244 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 19:56:49.291765 containerd[1583]: 2026-01-23 19:56:49.244 [INFO][4951] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.10.136/26] IPv6=[] ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" HandleID="k8s-pod-network.5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Workload="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" Jan 23 19:56:49.294905 containerd[1583]: 2026-01-23 19:56:49.248 [INFO][4939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"981744d6-418c-41e4-8d22-4fb530fbf1db", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-4gplb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie59ade7700f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:49.294905 containerd[1583]: 2026-01-23 19:56:49.249 [INFO][4939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.136/32] ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" Jan 23 19:56:49.294905 containerd[1583]: 2026-01-23 19:56:49.249 [INFO][4939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie59ade7700f ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" Jan 23 19:56:49.294905 containerd[1583]: 2026-01-23 19:56:49.260 [INFO][4939] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" Jan 23 19:56:49.294905 containerd[1583]: 2026-01-23 19:56:49.263 [INFO][4939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"981744d6-418c-41e4-8d22-4fb530fbf1db", ResourceVersion:"734", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 19, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hs5p8.gb1.brightbox.com", ContainerID:"5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03", Pod:"csi-node-driver-4gplb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie59ade7700f", MAC:"3a:29:7b:76:d4:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 19:56:49.294905 containerd[1583]: 2026-01-23 19:56:49.286 [INFO][4939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" Namespace="calico-system" Pod="csi-node-driver-4gplb" WorkloadEndpoint="srv--hs5p8.gb1.brightbox.com-k8s-csi--node--driver--4gplb-eth0" Jan 23 19:56:49.359410 containerd[1583]: time="2026-01-23T19:56:49.359106255Z" level=info msg="connecting to shim 5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03" 
address="unix:///run/containerd/s/57c3533adbfe0dcc1818f90b6f49f5c88d3d091ec7689778969c1fac81a42d00" namespace=k8s.io protocol=ttrpc version=3 Jan 23 19:56:49.406219 systemd[1]: Started cri-containerd-5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03.scope - libcontainer container 5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03. Jan 23 19:56:49.460769 containerd[1583]: time="2026-01-23T19:56:49.460669428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4gplb,Uid:981744d6-418c-41e4-8d22-4fb530fbf1db,Namespace:calico-system,Attempt:0,} returns sandbox id \"5924e5f53fe7495ff272908beddd2a4e90ee8adf347402eb0a2aca2bb623fc03\"" Jan 23 19:56:49.463632 containerd[1583]: time="2026-01-23T19:56:49.463501722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:56:49.789797 containerd[1583]: time="2026-01-23T19:56:49.789570026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:49.790962 containerd[1583]: time="2026-01-23T19:56:49.790903708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:56:49.791156 containerd[1583]: time="2026-01-23T19:56:49.790909886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:56:49.791855 kubelet[2892]: E0123 19:56:49.791637 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:56:49.791855 kubelet[2892]: E0123 19:56:49.791723 2892 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:56:49.792573 kubelet[2892]: E0123 19:56:49.792040 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:Runtime
Default,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:49.795648 containerd[1583]: time="2026-01-23T19:56:49.795603896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:56:50.100387 containerd[1583]: time="2026-01-23T19:56:50.100076973Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:50.101755 containerd[1583]: time="2026-01-23T19:56:50.101614242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:56:50.101755 containerd[1583]: time="2026-01-23T19:56:50.101704688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:56:50.102171 kubelet[2892]: E0123 19:56:50.102048 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:56:50.102289 kubelet[2892]: E0123 19:56:50.102179 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:56:50.103367 kubelet[2892]: E0123 19:56:50.103154 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilitie
s{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:50.104511 kubelet[2892]: E0123 19:56:50.104432 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:50.480137 systemd-networkd[1500]: calie59ade7700f: Gained IPv6LL Jan 23 19:56:50.568632 kubelet[2892]: E0123 19:56:50.568521 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:56:52.065606 containerd[1583]: time="2026-01-23T19:56:52.065547272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:56:52.392533 containerd[1583]: time="2026-01-23T19:56:52.392388821Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:52.394850 containerd[1583]: time="2026-01-23T19:56:52.394157896Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:56:52.394850 containerd[1583]: time="2026-01-23T19:56:52.394239713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:56:52.395001 kubelet[2892]: E0123 19:56:52.394497 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:52.395001 kubelet[2892]: E0123 19:56:52.394575 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:52.396257 kubelet[2892]: E0123 19:56:52.394767 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq2jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fd877769c-spxf6_calico-apiserver(ccd58232-7772-4e2c-865f-5e90b11eb5bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:52.396872 kubelet[2892]: E0123 19:56:52.396823 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:56:53.069117 containerd[1583]: time="2026-01-23T19:56:53.069038890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:56:53.376148 containerd[1583]: 
time="2026-01-23T19:56:53.376062012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:53.377531 containerd[1583]: time="2026-01-23T19:56:53.377482873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:56:53.377727 containerd[1583]: time="2026-01-23T19:56:53.377500486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:56:53.377942 kubelet[2892]: E0123 19:56:53.377841 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:53.378030 kubelet[2892]: E0123 19:56:53.377964 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:56:53.378818 containerd[1583]: time="2026-01-23T19:56:53.378527183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:56:53.379689 kubelet[2892]: E0123 19:56:53.379622 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkxlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fd877769c-tz6z7_calico-apiserver(4e1c1444-00fb-4816-822a-67edc8d93d18): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:53.381204 kubelet[2892]: E0123 19:56:53.380843 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:56:53.717180 containerd[1583]: time="2026-01-23T19:56:53.716933210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:53.719668 containerd[1583]: time="2026-01-23T19:56:53.719506209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:56:53.719921 containerd[1583]: time="2026-01-23T19:56:53.719630541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:56:53.720308 kubelet[2892]: E0123 19:56:53.720222 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:56:53.720774 kubelet[2892]: E0123 19:56:53.720318 2892 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:56:53.721981 kubelet[2892]: E0123 19:56:53.720599 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bttl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p9x6m_calico-system(b9e2459e-4d22-438e-9c19-f8662b6a9620): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:53.723434 kubelet[2892]: E0123 19:56:53.723346 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 
19:56:55.066716 containerd[1583]: time="2026-01-23T19:56:55.066463702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:56:55.383643 containerd[1583]: time="2026-01-23T19:56:55.383566231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:55.385142 containerd[1583]: time="2026-01-23T19:56:55.385093208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:56:55.385331 containerd[1583]: time="2026-01-23T19:56:55.385228811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:56:55.385576 kubelet[2892]: E0123 19:56:55.385524 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:56:55.386484 kubelet[2892]: E0123 19:56:55.386189 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:56:55.386484 kubelet[2892]: E0123 19:56:55.386404 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8dd94f387d0441c7a2d0c496a36edcb6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:55.388876 containerd[1583]: time="2026-01-23T19:56:55.388823157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 
19:56:55.718518 containerd[1583]: time="2026-01-23T19:56:55.718272933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:56:55.720012 containerd[1583]: time="2026-01-23T19:56:55.719933815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:56:55.720119 containerd[1583]: time="2026-01-23T19:56:55.720068552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:56:55.720540 kubelet[2892]: E0123 19:56:55.720478 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:56:55.720668 kubelet[2892]: E0123 19:56:55.720558 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:56:55.720844 kubelet[2892]: E0123 19:56:55.720759 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:56:55.722182 kubelet[2892]: E0123 19:56:55.722136 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:57:02.066993 containerd[1583]: time="2026-01-23T19:57:02.066897281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:57:02.386291 containerd[1583]: time="2026-01-23T19:57:02.385993742Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:02.387869 containerd[1583]: time="2026-01-23T19:57:02.387654165Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:57:02.388301 containerd[1583]: time="2026-01-23T19:57:02.387845516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=85" Jan 23 19:57:02.389081 kubelet[2892]: E0123 19:57:02.388524 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:57:02.389081 kubelet[2892]: E0123 19:57:02.388674 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:57:02.389081 kubelet[2892]: E0123 19:57:02.388948 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbhxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66675fd984-tkh5k_calico-system(ef88c33e-7bcc-4e40-8e39-a5221bbcac5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:02.390518 kubelet[2892]: E0123 19:57:02.390168 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:57:04.069015 kubelet[2892]: E0123 19:57:04.068941 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:57:05.068765 containerd[1583]: time="2026-01-23T19:57:05.068700648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:57:05.381286 containerd[1583]: time="2026-01-23T19:57:05.381206959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:05.383078 containerd[1583]: time="2026-01-23T19:57:05.383021229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:57:05.383191 containerd[1583]: time="2026-01-23T19:57:05.383134065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:57:05.383628 kubelet[2892]: E0123 19:57:05.383565 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:57:05.384978 kubelet[2892]: E0123 19:57:05.383906 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:57:05.384978 kubelet[2892]: E0123 19:57:05.384149 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePo
licy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:05.387187 containerd[1583]: time="2026-01-23T19:57:05.387158985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:57:05.695917 containerd[1583]: time="2026-01-23T19:57:05.695586335Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:05.696976 containerd[1583]: time="2026-01-23T19:57:05.696848992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:57:05.696976 containerd[1583]: time="2026-01-23T19:57:05.696933140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:57:05.697829 kubelet[2892]: E0123 19:57:05.697643 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:57:05.697829 kubelet[2892]: E0123 19:57:05.697722 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:57:05.698283 kubelet[2892]: E0123 19:57:05.698181 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefaul
t,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:05.699869 kubelet[2892]: E0123 19:57:05.699750 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:57:07.005158 systemd[1]: Started sshd@9-10.230.78.134:22-68.220.241.50:43764.service - OpenSSH per-connection server daemon (68.220.241.50:43764). 
Jan 23 19:57:07.632993 sshd[5033]: Accepted publickey for core from 68.220.241.50 port 43764 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:07.636183 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:07.645897 systemd-logind[1561]: New session 12 of user core. Jan 23 19:57:07.652430 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 19:57:08.072722 kubelet[2892]: E0123 19:57:08.072115 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:57:08.100173 kubelet[2892]: E0123 19:57:08.100078 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:57:08.881670 sshd[5036]: Connection closed by 68.220.241.50 port 43764 Jan 23 19:57:08.882528 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:08.899631 systemd[1]: sshd@9-10.230.78.134:22-68.220.241.50:43764.service: Deactivated successfully. Jan 23 19:57:08.904007 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 19:57:08.907142 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Jan 23 19:57:08.910365 systemd-logind[1561]: Removed session 12. Jan 23 19:57:09.067299 kubelet[2892]: E0123 19:57:09.067099 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:57:13.989035 systemd[1]: Started sshd@10-10.230.78.134:22-68.220.241.50:53094.service - OpenSSH per-connection server daemon (68.220.241.50:53094). Jan 23 19:57:14.624005 sshd[5076]: Accepted publickey for core from 68.220.241.50 port 53094 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:14.626487 sshd-session[5076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:14.634978 systemd-logind[1561]: New session 13 of user core. Jan 23 19:57:14.642090 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 19:57:15.067606 kubelet[2892]: E0123 19:57:15.067260 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:57:15.267999 sshd[5079]: Connection closed by 68.220.241.50 port 53094 Jan 23 19:57:15.270354 sshd-session[5076]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:15.277038 systemd[1]: sshd@10-10.230.78.134:22-68.220.241.50:53094.service: Deactivated successfully. Jan 23 19:57:15.277654 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Jan 23 19:57:15.281378 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 19:57:15.285378 systemd-logind[1561]: Removed session 13. 
Jan 23 19:57:19.068703 containerd[1583]: time="2026-01-23T19:57:19.068059285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:57:19.400452 containerd[1583]: time="2026-01-23T19:57:19.400226828Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:19.401913 containerd[1583]: time="2026-01-23T19:57:19.401830423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:57:19.402013 containerd[1583]: time="2026-01-23T19:57:19.401881863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:57:19.402435 kubelet[2892]: E0123 19:57:19.402306 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:57:19.402435 kubelet[2892]: E0123 19:57:19.402401 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:57:19.404615 containerd[1583]: time="2026-01-23T19:57:19.404573489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:57:19.411795 kubelet[2892]: E0123 19:57:19.411723 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8dd94f387d0441c7a2d0c496a36edcb6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:19.707287 containerd[1583]: time="2026-01-23T19:57:19.706920211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:19.708874 
containerd[1583]: time="2026-01-23T19:57:19.708332462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:57:19.708874 containerd[1583]: time="2026-01-23T19:57:19.708400895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:57:19.709333 kubelet[2892]: E0123 19:57:19.709180 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:57:19.709333 kubelet[2892]: E0123 19:57:19.709263 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:57:19.710318 containerd[1583]: time="2026-01-23T19:57:19.710286427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 19:57:19.710742 kubelet[2892]: E0123 19:57:19.709659 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq2jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fd877769c-spxf6_calico-apiserver(ccd58232-7772-4e2c-865f-5e90b11eb5bb): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:19.711934 kubelet[2892]: E0123 19:57:19.711862 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:57:20.016287 containerd[1583]: time="2026-01-23T19:57:20.015456680Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:20.017085 containerd[1583]: time="2026-01-23T19:57:20.017026487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:57:20.017176 containerd[1583]: time="2026-01-23T19:57:20.017151728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:57:20.017832 kubelet[2892]: E0123 19:57:20.017434 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 
19:57:20.017832 kubelet[2892]: E0123 19:57:20.017516 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:57:20.017832 kubelet[2892]: E0123 19:57:20.017683 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*fa
lse,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:20.019295 kubelet[2892]: E0123 19:57:20.019250 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:57:20.372215 systemd[1]: Started sshd@11-10.230.78.134:22-68.220.241.50:53110.service - OpenSSH per-connection server daemon (68.220.241.50:53110). 
Jan 23 19:57:20.965907 sshd[5100]: Accepted publickey for core from 68.220.241.50 port 53110 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:20.971184 sshd-session[5100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:20.980291 systemd-logind[1561]: New session 14 of user core. Jan 23 19:57:20.987250 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 19:57:21.073119 kubelet[2892]: E0123 19:57:21.073012 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:57:21.479254 sshd[5103]: Connection closed by 68.220.241.50 port 53110 Jan 23 19:57:21.478775 sshd-session[5100]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:21.486312 systemd[1]: sshd@11-10.230.78.134:22-68.220.241.50:53110.service: Deactivated successfully. Jan 23 19:57:21.491509 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 19:57:21.494906 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. 
Jan 23 19:57:21.496935 systemd-logind[1561]: Removed session 14. Jan 23 19:57:21.582986 systemd[1]: Started sshd@12-10.230.78.134:22-68.220.241.50:53114.service - OpenSSH per-connection server daemon (68.220.241.50:53114). Jan 23 19:57:22.068640 containerd[1583]: time="2026-01-23T19:57:22.068556995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:57:22.175665 sshd[5115]: Accepted publickey for core from 68.220.241.50 port 53114 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:22.176641 sshd-session[5115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:22.191883 systemd-logind[1561]: New session 15 of user core. Jan 23 19:57:22.203098 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 19:57:22.383540 containerd[1583]: time="2026-01-23T19:57:22.383269094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:22.384689 containerd[1583]: time="2026-01-23T19:57:22.384557964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:57:22.384689 containerd[1583]: time="2026-01-23T19:57:22.384616650Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:57:22.385835 kubelet[2892]: E0123 19:57:22.385073 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:57:22.385835 kubelet[2892]: E0123 
19:57:22.385153 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:57:22.385835 kubelet[2892]: E0123 19:57:22.385489 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bttl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/hea
lth -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p9x6m_calico-system(b9e2459e-4d22-438e-9c19-f8662b6a9620): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:22.386533 containerd[1583]: time="2026-01-23T19:57:22.386203258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:57:22.386789 kubelet[2892]: E0123 19:57:22.386737 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:57:22.693912 containerd[1583]: time="2026-01-23T19:57:22.693341327Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:22.694904 containerd[1583]: time="2026-01-23T19:57:22.694836861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:57:22.695013 containerd[1583]: time="2026-01-23T19:57:22.694952720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:57:22.695446 kubelet[2892]: E0123 19:57:22.695354 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:57:22.695853 kubelet[2892]: E0123 19:57:22.695450 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:57:22.695853 kubelet[2892]: E0123 19:57:22.695658 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkxlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7fd877769c-tz6z7_calico-apiserver(4e1c1444-00fb-4816-822a-67edc8d93d18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:22.697038 kubelet[2892]: E0123 19:57:22.696911 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:57:22.758575 sshd[5118]: Connection closed by 68.220.241.50 port 53114 Jan 23 19:57:22.759900 sshd-session[5115]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:22.768825 systemd[1]: sshd@12-10.230.78.134:22-68.220.241.50:53114.service: Deactivated successfully. Jan 23 19:57:22.772678 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 19:57:22.774462 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Jan 23 19:57:22.777720 systemd-logind[1561]: Removed session 15. Jan 23 19:57:22.861508 systemd[1]: Started sshd@13-10.230.78.134:22-68.220.241.50:48650.service - OpenSSH per-connection server daemon (68.220.241.50:48650). Jan 23 19:57:23.446476 sshd[5128]: Accepted publickey for core from 68.220.241.50 port 48650 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:23.448515 sshd-session[5128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:23.457911 systemd-logind[1561]: New session 16 of user core. 
Jan 23 19:57:23.464144 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 19:57:23.944054 sshd[5131]: Connection closed by 68.220.241.50 port 48650 Jan 23 19:57:23.945112 sshd-session[5128]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:23.951955 systemd[1]: sshd@13-10.230.78.134:22-68.220.241.50:48650.service: Deactivated successfully. Jan 23 19:57:23.955593 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 19:57:23.956962 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Jan 23 19:57:23.959128 systemd-logind[1561]: Removed session 16. Jan 23 19:57:28.067389 containerd[1583]: time="2026-01-23T19:57:28.067214140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:57:28.379182 containerd[1583]: time="2026-01-23T19:57:28.379097308Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:28.380887 containerd[1583]: time="2026-01-23T19:57:28.380744497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:57:28.380887 containerd[1583]: time="2026-01-23T19:57:28.380836002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:57:28.381287 kubelet[2892]: E0123 19:57:28.381178 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 
23 19:57:28.383012 kubelet[2892]: E0123 19:57:28.381310 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:57:28.383012 kubelet[2892]: E0123 19:57:28.381509 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbhxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-66675fd984-tkh5k_calico-system(ef88c33e-7bcc-4e40-8e39-a5221bbcac5a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:28.383012 kubelet[2892]: E0123 19:57:28.382791 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:57:29.051990 systemd[1]: Started sshd@14-10.230.78.134:22-68.220.241.50:48660.service - OpenSSH per-connection server daemon (68.220.241.50:48660). Jan 23 19:57:29.640190 sshd[5143]: Accepted publickey for core from 68.220.241.50 port 48660 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:29.642622 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:29.649497 systemd-logind[1561]: New session 17 of user core. Jan 23 19:57:29.662024 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:57:30.169941 sshd[5151]: Connection closed by 68.220.241.50 port 48660 Jan 23 19:57:30.170240 sshd-session[5143]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:30.179945 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:57:30.181381 systemd[1]: sshd@14-10.230.78.134:22-68.220.241.50:48660.service: Deactivated successfully. Jan 23 19:57:30.185301 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:57:30.189380 systemd-logind[1561]: Removed session 17. 
Jan 23 19:57:32.066738 containerd[1583]: time="2026-01-23T19:57:32.066653584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:57:32.394951 containerd[1583]: time="2026-01-23T19:57:32.394872715Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:32.396095 containerd[1583]: time="2026-01-23T19:57:32.396040333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:57:32.396195 containerd[1583]: time="2026-01-23T19:57:32.396163994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:57:32.396586 kubelet[2892]: E0123 19:57:32.396506 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:57:32.397276 kubelet[2892]: E0123 19:57:32.396598 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:57:32.397790 kubelet[2892]: E0123 19:57:32.397613 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:32.400955 containerd[1583]: time="2026-01-23T19:57:32.400653009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:57:32.711740 containerd[1583]: time="2026-01-23T19:57:32.711570437Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:57:32.713464 containerd[1583]: time="2026-01-23T19:57:32.713377669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:57:32.713464 containerd[1583]: time="2026-01-23T19:57:32.713427476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:57:32.713700 kubelet[2892]: E0123 19:57:32.713650 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:57:32.713776 kubelet[2892]: E0123 19:57:32.713715 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:57:32.713943 kubelet[2892]: E0123 
19:57:32.713887 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:57:32.715187 kubelet[2892]: E0123 19:57:32.714988 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:57:33.069367 kubelet[2892]: E0123 19:57:33.066535 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:57:34.065983 kubelet[2892]: E0123 19:57:34.065877 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:57:35.067965 kubelet[2892]: E0123 19:57:35.067337 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:57:35.073525 kubelet[2892]: E0123 19:57:35.073405 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:57:35.275131 systemd[1]: Started sshd@15-10.230.78.134:22-68.220.241.50:52742.service - OpenSSH per-connection server daemon (68.220.241.50:52742). Jan 23 19:57:35.884757 sshd[5168]: Accepted publickey for core from 68.220.241.50 port 52742 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:35.886939 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:35.896546 systemd-logind[1561]: New session 18 of user core. Jan 23 19:57:35.903133 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 19:57:36.414183 sshd[5171]: Connection closed by 68.220.241.50 port 52742 Jan 23 19:57:36.415377 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:36.422048 systemd[1]: sshd@15-10.230.78.134:22-68.220.241.50:52742.service: Deactivated successfully. Jan 23 19:57:36.426177 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:57:36.428189 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:57:36.430245 systemd-logind[1561]: Removed session 18. 
Jan 23 19:57:40.066354 kubelet[2892]: E0123 19:57:40.066240 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:57:41.522501 systemd[1]: Started sshd@16-10.230.78.134:22-68.220.241.50:52748.service - OpenSSH per-connection server daemon (68.220.241.50:52748). Jan 23 19:57:42.112004 sshd[5212]: Accepted publickey for core from 68.220.241.50 port 52748 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:42.113461 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:42.121369 systemd-logind[1561]: New session 19 of user core. Jan 23 19:57:42.133707 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:57:42.611179 sshd[5215]: Connection closed by 68.220.241.50 port 52748 Jan 23 19:57:42.612518 sshd-session[5212]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:42.618596 systemd[1]: sshd@16-10.230.78.134:22-68.220.241.50:52748.service: Deactivated successfully. Jan 23 19:57:42.624054 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:57:42.628069 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Jan 23 19:57:42.631773 systemd-logind[1561]: Removed session 19. 
Jan 23 19:57:44.066530 kubelet[2892]: E0123 19:57:44.066458 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:57:47.069470 kubelet[2892]: E0123 19:57:47.068661 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:57:47.071221 kubelet[2892]: E0123 19:57:47.068800 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:57:47.079238 kubelet[2892]: E0123 19:57:47.070782 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:57:47.721576 systemd[1]: Started sshd@17-10.230.78.134:22-68.220.241.50:47814.service - OpenSSH per-connection server daemon (68.220.241.50:47814). Jan 23 19:57:48.306037 sshd[5226]: Accepted publickey for core from 68.220.241.50 port 47814 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:48.308408 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:48.316715 systemd-logind[1561]: New session 20 of user core. Jan 23 19:57:48.326059 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 23 19:57:48.802863 sshd[5229]: Connection closed by 68.220.241.50 port 47814 Jan 23 19:57:48.802072 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:48.810204 systemd[1]: sshd@17-10.230.78.134:22-68.220.241.50:47814.service: Deactivated successfully. Jan 23 19:57:48.815575 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:57:48.817224 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:57:48.819624 systemd-logind[1561]: Removed session 20. Jan 23 19:57:48.905268 systemd[1]: Started sshd@18-10.230.78.134:22-68.220.241.50:47820.service - OpenSSH per-connection server daemon (68.220.241.50:47820). Jan 23 19:57:49.069587 kubelet[2892]: E0123 19:57:49.069242 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:57:49.502240 sshd[5243]: Accepted publickey for core from 68.220.241.50 port 47820 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:49.505912 sshd-session[5243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:49.515147 systemd-logind[1561]: New session 21 of user core. Jan 23 19:57:49.521116 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 19:57:50.397562 sshd[5246]: Connection closed by 68.220.241.50 port 47820 Jan 23 19:57:50.399984 sshd-session[5243]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:50.410051 systemd[1]: sshd@18-10.230.78.134:22-68.220.241.50:47820.service: Deactivated successfully. Jan 23 19:57:50.413730 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:57:50.416020 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:57:50.418730 systemd-logind[1561]: Removed session 21. Jan 23 19:57:50.501204 systemd[1]: Started sshd@19-10.230.78.134:22-68.220.241.50:47830.service - OpenSSH per-connection server daemon (68.220.241.50:47830). Jan 23 19:57:51.156535 sshd[5256]: Accepted publickey for core from 68.220.241.50 port 47830 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:51.157505 sshd-session[5256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:51.166857 systemd-logind[1561]: New session 22 of user core. Jan 23 19:57:51.176022 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 19:57:52.512895 sshd[5259]: Connection closed by 68.220.241.50 port 47830 Jan 23 19:57:52.515761 sshd-session[5256]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:52.524980 systemd[1]: sshd@19-10.230.78.134:22-68.220.241.50:47830.service: Deactivated successfully. Jan 23 19:57:52.528507 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 19:57:52.530913 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Jan 23 19:57:52.533505 systemd-logind[1561]: Removed session 22. Jan 23 19:57:52.614369 systemd[1]: Started sshd@20-10.230.78.134:22-68.220.241.50:48158.service - OpenSSH per-connection server daemon (68.220.241.50:48158). 
Jan 23 19:57:53.222957 sshd[5276]: Accepted publickey for core from 68.220.241.50 port 48158 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:53.224417 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:53.233796 systemd-logind[1561]: New session 23 of user core. Jan 23 19:57:53.239003 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 19:57:54.001342 sshd[5279]: Connection closed by 68.220.241.50 port 48158 Jan 23 19:57:54.002590 sshd-session[5276]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:54.009187 systemd[1]: sshd@20-10.230.78.134:22-68.220.241.50:48158.service: Deactivated successfully. Jan 23 19:57:54.014118 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 19:57:54.015767 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Jan 23 19:57:54.018312 systemd-logind[1561]: Removed session 23. Jan 23 19:57:54.066830 kubelet[2892]: E0123 19:57:54.066252 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:57:54.104107 systemd[1]: Started sshd@21-10.230.78.134:22-68.220.241.50:48160.service - OpenSSH per-connection server daemon (68.220.241.50:48160). 
Jan 23 19:57:54.706561 sshd[5289]: Accepted publickey for core from 68.220.241.50 port 48160 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:57:54.709058 sshd-session[5289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:57:54.718002 systemd-logind[1561]: New session 24 of user core. Jan 23 19:57:54.726079 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 19:57:55.208426 sshd[5292]: Connection closed by 68.220.241.50 port 48160 Jan 23 19:57:55.209184 sshd-session[5289]: pam_unix(sshd:session): session closed for user core Jan 23 19:57:55.215903 systemd[1]: sshd@21-10.230.78.134:22-68.220.241.50:48160.service: Deactivated successfully. Jan 23 19:57:55.220977 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 19:57:55.225738 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Jan 23 19:57:55.227546 systemd-logind[1561]: Removed session 24. Jan 23 19:57:57.070957 kubelet[2892]: E0123 19:57:57.069910 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" 
podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:57:59.071668 kubelet[2892]: E0123 19:57:59.070933 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:58:00.325633 systemd[1]: Started sshd@22-10.230.78.134:22-68.220.241.50:48172.service - OpenSSH per-connection server daemon (68.220.241.50:48172). Jan 23 19:58:00.915017 sshd[5312]: Accepted publickey for core from 68.220.241.50 port 48172 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:58:00.917332 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:58:00.925542 systemd-logind[1561]: New session 25 of user core. Jan 23 19:58:00.932041 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 19:58:01.067852 containerd[1583]: time="2026-01-23T19:58:01.066530889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:58:01.396836 containerd[1583]: time="2026-01-23T19:58:01.396725080Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:58:01.399422 containerd[1583]: time="2026-01-23T19:58:01.399261284Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:58:01.399557 containerd[1583]: time="2026-01-23T19:58:01.399308634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:58:01.399999 kubelet[2892]: E0123 19:58:01.399920 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:58:01.400606 kubelet[2892]: E0123 19:58:01.400042 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:58:01.400606 kubelet[2892]: E0123 19:58:01.400254 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq2jm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-7fd877769c-spxf6_calico-apiserver(ccd58232-7772-4e2c-865f-5e90b11eb5bb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:58:01.402021 kubelet[2892]: E0123 19:58:01.401968 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:58:01.473878 sshd[5315]: Connection closed by 68.220.241.50 port 48172 Jan 23 19:58:01.476135 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Jan 23 19:58:01.483446 systemd[1]: sshd@22-10.230.78.134:22-68.220.241.50:48172.service: Deactivated successfully. Jan 23 19:58:01.483803 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Jan 23 19:58:01.486951 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 19:58:01.491823 systemd-logind[1561]: Removed session 25. 
Jan 23 19:58:02.065715 kubelet[2892]: E0123 19:58:02.065173 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620" Jan 23 19:58:04.066834 containerd[1583]: time="2026-01-23T19:58:04.066752181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 19:58:04.409074 containerd[1583]: time="2026-01-23T19:58:04.408986810Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:58:04.410730 containerd[1583]: time="2026-01-23T19:58:04.410197118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 19:58:04.410730 containerd[1583]: time="2026-01-23T19:58:04.410322219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 19:58:04.410872 kubelet[2892]: E0123 19:58:04.410569 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:58:04.410872 kubelet[2892]: E0123 19:58:04.410649 2892 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 19:58:04.411520 kubelet[2892]: E0123 19:58:04.411273 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkxlm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fd877769c-tz6z7_calico-apiserver(4e1c1444-00fb-4816-822a-67edc8d93d18): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 19:58:04.412894 kubelet[2892]: E0123 19:58:04.412860 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-tz6z7" podUID="4e1c1444-00fb-4816-822a-67edc8d93d18" Jan 23 19:58:06.585084 systemd[1]: Started sshd@23-10.230.78.134:22-68.220.241.50:57188.service - OpenSSH per-connection server daemon (68.220.241.50:57188). 
Jan 23 19:58:07.221564 sshd[5326]: Accepted publickey for core from 68.220.241.50 port 57188 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:58:07.224485 sshd-session[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:58:07.234908 systemd-logind[1561]: New session 26 of user core. Jan 23 19:58:07.243134 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 19:58:07.864296 sshd[5329]: Connection closed by 68.220.241.50 port 57188 Jan 23 19:58:07.866457 sshd-session[5326]: pam_unix(sshd:session): session closed for user core Jan 23 19:58:07.872577 systemd[1]: sshd@23-10.230.78.134:22-68.220.241.50:57188.service: Deactivated successfully. Jan 23 19:58:07.877519 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 19:58:07.882886 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Jan 23 19:58:07.884485 systemd-logind[1561]: Removed session 26. Jan 23 19:58:08.065232 kubelet[2892]: E0123 19:58:08.064514 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-66675fd984-tkh5k" podUID="ef88c33e-7bcc-4e40-8e39-a5221bbcac5a" Jan 23 19:58:09.072331 containerd[1583]: time="2026-01-23T19:58:09.072211365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 19:58:09.383835 containerd[1583]: time="2026-01-23T19:58:09.383750994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:58:09.388086 containerd[1583]: 
time="2026-01-23T19:58:09.387871702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 19:58:09.388086 containerd[1583]: time="2026-01-23T19:58:09.388029746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 19:58:09.388498 kubelet[2892]: E0123 19:58:09.388432 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:58:09.389292 kubelet[2892]: E0123 19:58:09.388521 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 19:58:09.389292 kubelet[2892]: E0123 19:58:09.388681 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8dd94f387d0441c7a2d0c496a36edcb6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 19:58:09.391691 containerd[1583]: time="2026-01-23T19:58:09.391383705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 
19:58:09.704068 containerd[1583]: time="2026-01-23T19:58:09.703234933Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:58:09.705973 containerd[1583]: time="2026-01-23T19:58:09.705780091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 19:58:09.705973 containerd[1583]: time="2026-01-23T19:58:09.705918853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 19:58:09.706285 kubelet[2892]: E0123 19:58:09.706208 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:58:09.706384 kubelet[2892]: E0123 19:58:09.706308 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 19:58:09.706581 kubelet[2892]: E0123 19:58:09.706494 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wh5vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8c4bc59d-89rwz_calico-system(ab667679-fb0a-4ab2-a144-1015741c2ce8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 19:58:09.707710 kubelet[2892]: E0123 19:58:09.707643 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8c4bc59d-89rwz" podUID="ab667679-fb0a-4ab2-a144-1015741c2ce8" Jan 23 19:58:12.968390 systemd[1]: Started sshd@24-10.230.78.134:22-68.220.241.50:50428.service - OpenSSH per-connection server daemon (68.220.241.50:50428). Jan 23 19:58:13.597007 sshd[5366]: Accepted publickey for core from 68.220.241.50 port 50428 ssh2: RSA SHA256:GIV4DUfrC0TY+kgt5VYUWsdl0xQV85zfONJi5Rwbz2s Jan 23 19:58:13.599451 sshd-session[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:58:13.609549 systemd-logind[1561]: New session 27 of user core. Jan 23 19:58:13.616753 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 23 19:58:14.067849 containerd[1583]: time="2026-01-23T19:58:14.067407214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 19:58:14.069355 kubelet[2892]: E0123 19:58:14.067942 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fd877769c-spxf6" podUID="ccd58232-7772-4e2c-865f-5e90b11eb5bb" Jan 23 19:58:14.154647 sshd[5369]: Connection closed by 68.220.241.50 port 50428 Jan 23 19:58:14.155795 sshd-session[5366]: pam_unix(sshd:session): session closed for user core Jan 23 19:58:14.164346 systemd[1]: sshd@24-10.230.78.134:22-68.220.241.50:50428.service: Deactivated successfully. Jan 23 19:58:14.170544 systemd[1]: session-27.scope: Deactivated successfully. Jan 23 19:58:14.176112 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit. Jan 23 19:58:14.178112 systemd-logind[1561]: Removed session 27. 
Jan 23 19:58:14.403569 containerd[1583]: time="2026-01-23T19:58:14.402990590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:58:14.405226 containerd[1583]: time="2026-01-23T19:58:14.405034348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 19:58:14.405226 containerd[1583]: time="2026-01-23T19:58:14.405165274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 19:58:14.405496 kubelet[2892]: E0123 19:58:14.405379 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:58:14.405588 kubelet[2892]: E0123 19:58:14.405513 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 19:58:14.407011 kubelet[2892]: E0123 19:58:14.405769 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 19:58:14.408758 containerd[1583]: time="2026-01-23T19:58:14.408727344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 19:58:14.722854 containerd[1583]: time="2026-01-23T19:58:14.722644549Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:58:14.725179 containerd[1583]: time="2026-01-23T19:58:14.725074294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 19:58:14.725179 containerd[1583]: time="2026-01-23T19:58:14.725137265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 19:58:14.725588 kubelet[2892]: E0123 19:58:14.725514 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:58:14.725702 kubelet[2892]: E0123 19:58:14.725606 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 19:58:14.725969 kubelet[2892]: E0123 
19:58:14.725827 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p9qqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-4gplb_calico-system(981744d6-418c-41e4-8d22-4fb530fbf1db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 19:58:14.727441 kubelet[2892]: E0123 19:58:14.727391 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-4gplb" podUID="981744d6-418c-41e4-8d22-4fb530fbf1db" Jan 23 19:58:16.065546 containerd[1583]: time="2026-01-23T19:58:16.064759818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:58:16.379174 containerd[1583]: time="2026-01-23T19:58:16.379099603Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:58:16.383371 containerd[1583]: time="2026-01-23T19:58:16.383325890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:58:16.385000 
containerd[1583]: time="2026-01-23T19:58:16.383446170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:58:16.385118 kubelet[2892]: E0123 19:58:16.385030 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:58:16.386283 kubelet[2892]: E0123 19:58:16.385128 2892 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:58:16.386283 kubelet[2892]: E0123 19:58:16.385365 2892 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9bttl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-p9x6m_calico-system(b9e2459e-4d22-438e-9c19-f8662b6a9620): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:58:16.386673 kubelet[2892]: E0123 19:58:16.386575 2892 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-p9x6m" podUID="b9e2459e-4d22-438e-9c19-f8662b6a9620"