Jan 20 03:14:54.937683 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:14:52 -00 2026
Jan 20 03:14:54.937722 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 03:14:54.937735 kernel: BIOS-provided physical RAM map:
Jan 20 03:14:54.937744 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 03:14:54.937757 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 03:14:54.937766 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 03:14:54.937776 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 20 03:14:54.937785 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 20 03:14:54.937794 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 03:14:54.937803 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 03:14:54.937812 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 03:14:54.937821 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 03:14:54.937830 kernel: NX (Execute Disable) protection: active
Jan 20 03:14:54.937854 kernel: APIC: Static calls initialized
Jan 20 03:14:54.937865 kernel: SMBIOS 2.8 present.
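An aside on reading the e820 map above: each `BIOS-e820` entry is an inclusive physical address range with a type, and the "usable" entries are what the kernel can actually hand to the page allocator. The sketch below is a hypothetical helper (not anything from this log or the kernel source) that parses such lines from a captured log and totals the usable bytes; the regex assumes the exact formatting shown above.

```python
import re

# Hypothetical helper: sum the regions the firmware marked "usable" in a
# captured BIOS-e820 printout. Ranges are inclusive, hence the +1.
E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log_lines):
    """Total bytes of 'usable' RAM reported by the firmware."""
    total = 0
    for line in log_lines:
        m = E820_RE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1
    return total

# The two usable ranges from the map above:
log = [
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved",
    "kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable",
]
print(usable_bytes(log))  # 2146941952 bytes, just under 2 GiB
```

That figure is consistent with the ~2 GiB guest this log comes from (the later `Memory: 1887488K/2096616K available` line reports the same pool after kernel reservations).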
Jan 20 03:14:54.937875 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 20 03:14:54.937885 kernel: DMI: Memory slots populated: 1/1
Jan 20 03:14:54.937894 kernel: Hypervisor detected: KVM
Jan 20 03:14:54.937916 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 20 03:14:54.937929 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 03:14:54.937939 kernel: kvm-clock: using sched offset of 5886439590 cycles
Jan 20 03:14:54.937949 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 03:14:54.937959 kernel: tsc: Detected 2799.998 MHz processor
Jan 20 03:14:54.937969 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 03:14:54.937980 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 03:14:54.937989 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 20 03:14:54.937999 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 03:14:54.938009 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 03:14:54.938022 kernel: Using GB pages for direct mapping
Jan 20 03:14:54.938032 kernel: ACPI: Early table checksum verification disabled
Jan 20 03:14:54.938042 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 03:14:54.938052 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:14:54.938062 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:14:54.938071 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:14:54.938081 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 20 03:14:54.938091 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:14:54.938101 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:14:54.938114 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:14:54.938124 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 03:14:54.938134 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 20 03:14:54.938148 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 20 03:14:54.938159 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 20 03:14:54.938169 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 20 03:14:54.938192 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 20 03:14:54.938203 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 20 03:14:54.938213 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 20 03:14:54.938235 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 20 03:14:54.938245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 20 03:14:54.938255 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 20 03:14:54.938265 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 20 03:14:54.938275 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 20 03:14:54.938288 kernel: Zone ranges:
Jan 20 03:14:54.938306 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 03:14:54.938316 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 20 03:14:54.938326 kernel: Normal empty
Jan 20 03:14:54.938336 kernel: Device empty
Jan 20 03:14:54.938346 kernel: Movable zone start for each node
Jan 20 03:14:54.938356 kernel: Early memory node ranges
Jan 20 03:14:54.938366 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 03:14:54.938375 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 20 03:14:54.938388 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 20 03:14:54.938398 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 03:14:54.938408 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 03:14:54.938418 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 20 03:14:54.938440 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 03:14:54.938454 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 03:14:54.938474 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 03:14:54.938498 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 03:14:54.938508 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 03:14:54.938519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 03:14:54.938534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 03:14:54.938544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 03:14:54.938555 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 03:14:54.938565 kernel: TSC deadline timer available
Jan 20 03:14:54.938576 kernel: CPU topo: Max. logical packages: 16
Jan 20 03:14:54.938586 kernel: CPU topo: Max. logical dies: 16
Jan 20 03:14:54.939640 kernel: CPU topo: Max. dies per package: 1
Jan 20 03:14:54.939658 kernel: CPU topo: Max. threads per core: 1
Jan 20 03:14:54.939670 kernel: CPU topo: Num. cores per package: 1
Jan 20 03:14:54.939688 kernel: CPU topo: Num. threads per package: 1
Jan 20 03:14:54.939700 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 20 03:14:54.939711 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 03:14:54.939723 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 03:14:54.939734 kernel: Booting paravirtualized kernel on KVM
Jan 20 03:14:54.939746 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 03:14:54.939757 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 20 03:14:54.939781 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 20 03:14:54.939792 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 20 03:14:54.939807 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 20 03:14:54.939817 kernel: kvm-guest: PV spinlocks enabled
Jan 20 03:14:54.939828 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 03:14:54.939841 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea
Jan 20 03:14:54.939864 kernel: random: crng init done
Jan 20 03:14:54.939875 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 03:14:54.939894 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 03:14:54.939904 kernel: Fallback order for Node 0: 0
Jan 20 03:14:54.939931 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 20 03:14:54.939943 kernel: Policy zone: DMA32
Jan 20 03:14:54.939956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 03:14:54.939968 kernel: software IO TLB: area num 16.
Jan 20 03:14:54.939979 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 20 03:14:54.939990 kernel: Kernel/User page tables isolation: enabled
Jan 20 03:14:54.940001 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 20 03:14:54.940029 kernel: ftrace: allocated 157 pages with 5 groups
Jan 20 03:14:54.940039 kernel: Dynamic Preempt: voluntary
Jan 20 03:14:54.940054 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 03:14:54.940078 kernel: rcu: RCU event tracing is enabled.
Jan 20 03:14:54.940090 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 20 03:14:54.940101 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 03:14:54.940113 kernel: Rude variant of Tasks RCU enabled.
Jan 20 03:14:54.940124 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 03:14:54.940136 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 03:14:54.940147 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 20 03:14:54.940158 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 03:14:54.940174 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 03:14:54.940185 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 03:14:54.940196 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 20 03:14:54.940208 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 03:14:54.940228 kernel: Console: colour VGA+ 80x25
Jan 20 03:14:54.940243 kernel: printk: legacy console [tty0] enabled
Jan 20 03:14:54.940255 kernel: printk: legacy console [ttyS0] enabled
Jan 20 03:14:54.940267 kernel: ACPI: Core revision 20240827
Jan 20 03:14:54.940279 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 03:14:54.940291 kernel: x2apic enabled
Jan 20 03:14:54.940302 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 03:14:54.940326 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 20 03:14:54.940342 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 20 03:14:54.940354 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 03:14:54.940365 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 20 03:14:54.940389 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 20 03:14:54.940401 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 03:14:54.940416 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 03:14:54.940428 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 03:14:54.940440 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 20 03:14:54.940451 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 03:14:54.940472 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 03:14:54.940486 kernel: MDS: Mitigation: Clear CPU buffers
Jan 20 03:14:54.940498 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 20 03:14:54.940509 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 20 03:14:54.940521 kernel: active return thunk: its_return_thunk
Jan 20 03:14:54.940532 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 20 03:14:54.940544 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 03:14:54.940560 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 03:14:54.940572 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 03:14:54.941636 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 03:14:54.941656 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 20 03:14:54.941669 kernel: Freeing SMP alternatives memory: 32K
Jan 20 03:14:54.941680 kernel: pid_max: default: 32768 minimum: 301
Jan 20 03:14:54.941692 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 20 03:14:54.941704 kernel: landlock: Up and running.
Jan 20 03:14:54.941716 kernel: SELinux: Initializing.
Jan 20 03:14:54.941727 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 20 03:14:54.941751 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 20 03:14:54.941768 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 20 03:14:54.941780 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 20 03:14:54.941792 kernel: signal: max sigframe size: 1776
Jan 20 03:14:54.941815 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 03:14:54.941827 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 03:14:54.941838 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jan 20 03:14:54.941848 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 03:14:54.941859 kernel: smp: Bringing up secondary CPUs ...
Jan 20 03:14:54.941882 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 03:14:54.941896 kernel: .... node #0, CPUs: #1
Jan 20 03:14:54.941906 kernel: smp: Brought up 1 node, 2 CPUs
Jan 20 03:14:54.941917 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 20 03:14:54.941928 kernel: Memory: 1887488K/2096616K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 203112K reserved, 0K cma-reserved)
Jan 20 03:14:54.941944 kernel: devtmpfs: initialized
Jan 20 03:14:54.941954 kernel: x86/mm: Memory block size: 128MB
Jan 20 03:14:54.941965 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 03:14:54.941975 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 20 03:14:54.941986 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 03:14:54.942000 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 03:14:54.942010 kernel: audit: initializing netlink subsys (disabled)
Jan 20 03:14:54.942021 kernel: audit: type=2000 audit(1768878890.801:1): state=initialized audit_enabled=0 res=1
Jan 20 03:14:54.942032 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 03:14:54.942042 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 03:14:54.942053 kernel: cpuidle: using governor menu
Jan 20 03:14:54.942063 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 03:14:54.942083 kernel: dca service started, version 1.12.1
Jan 20 03:14:54.942094 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 20 03:14:54.942108 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 03:14:54.942118 kernel: PCI: Using configuration type 1 for base access
Jan 20 03:14:54.942129 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
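A note on the recurring `(order: N, M bytes)` annotations in the hash-table lines above (futex, dentry, inode, mount caches): the order is the power-of-two number of contiguous 4 KiB pages the table occupies, so M = 4096 << N. The snippet below is my own sanity check of that relationship, not kernel code:

```python
# Sanity check (own helper, not kernel code): an "order N" allocation is
# 2**N contiguous pages of PAGE_SIZE bytes each.
PAGE_SIZE = 4096

def order_for(nbytes):
    """Smallest page-block order whose size covers nbytes."""
    order = 0
    while (PAGE_SIZE << order) < nbytes:
        order += 1
    return order

print(order_for(262144))   # 6  -> matches "futex hash table ... (order: 6, 262144 bytes)"
print(order_for(2097152))  # 9  -> matches "Dentry cache ... (order: 9, 2097152 bytes)"
```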
Jan 20 03:14:54.942139 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 03:14:54.942150 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 03:14:54.942160 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 03:14:54.942171 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 03:14:54.942181 kernel: ACPI: Added _OSI(Module Device)
Jan 20 03:14:54.942192 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 03:14:54.942205 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 03:14:54.942216 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 03:14:54.942227 kernel: ACPI: Interpreter enabled
Jan 20 03:14:54.942237 kernel: ACPI: PM: (supports S0 S5)
Jan 20 03:14:54.942247 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 03:14:54.942258 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 03:14:54.942269 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 03:14:54.942279 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 03:14:54.942290 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 03:14:54.943643 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 03:14:54.943868 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 20 03:14:54.944032 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 20 03:14:54.944050 kernel: PCI host bridge to bus 0000:00
Jan 20 03:14:54.944236 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 03:14:54.944390 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 03:14:54.944558 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 03:14:54.945793 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 20 03:14:54.945942 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 03:14:54.946080 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 20 03:14:54.946231 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 03:14:54.946432 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 20 03:14:54.946665 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jan 20 03:14:54.946841 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jan 20 03:14:54.947007 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jan 20 03:14:54.947143 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jan 20 03:14:54.947294 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 03:14:54.947491 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.949697 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jan 20 03:14:54.949863 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 03:14:54.950029 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 20 03:14:54.950203 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 03:14:54.950376 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.950561 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jan 20 03:14:54.950737 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 03:14:54.950907 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 20 03:14:54.951059 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 03:14:54.951261 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.951415 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jan 20 03:14:54.954639 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 03:14:54.954812 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 20 03:14:54.954974 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 03:14:54.955175 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.955317 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jan 20 03:14:54.955486 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 03:14:54.955679 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 20 03:14:54.955845 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 03:14:54.956036 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.956204 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jan 20 03:14:54.956342 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 03:14:54.956520 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 20 03:14:54.958714 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 03:14:54.958921 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.959079 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jan 20 03:14:54.959239 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 03:14:54.959392 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 20 03:14:54.959579 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 03:14:54.959775 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.959937 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jan 20 03:14:54.960089 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 03:14:54.960252 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 20 03:14:54.960407 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 03:14:54.962482 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 20 03:14:54.962659 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jan 20 03:14:54.962837 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 03:14:54.962981 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 20 03:14:54.963147 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 03:14:54.963327 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 03:14:54.963492 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 20 03:14:54.963681 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jan 20 03:14:54.963847 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 20 03:14:54.964002 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jan 20 03:14:54.964184 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 03:14:54.964337 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jan 20 03:14:54.964501 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jan 20 03:14:54.970680 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 20 03:14:54.970880 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 20 03:14:54.971034 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 03:14:54.971220 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 20 03:14:54.971386 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jan 20 03:14:54.971574 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jan 20 03:14:54.971789 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 20 03:14:54.971969 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 20 03:14:54.972147 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 20 03:14:54.972304 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jan 20 03:14:54.972496 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 20 03:14:54.972681 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 20 03:14:54.972850 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 03:14:54.973056 kernel: pci_bus 0000:02: extended config space not accessible
Jan 20 03:14:54.973228 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jan 20 03:14:54.973410 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jan 20 03:14:54.973627 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 20 03:14:54.973837 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 20 03:14:54.974003 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jan 20 03:14:54.974157 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 03:14:54.974345 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 20 03:14:54.974537 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 20 03:14:54.974710 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 03:14:54.974888 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 03:14:54.975039 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 03:14:54.975183 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 03:14:54.975336 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 03:14:54.975500 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 03:14:54.975528 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 03:14:54.975540 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 03:14:54.975557 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 03:14:54.975569 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 03:14:54.975580 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 03:14:54.976622 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 03:14:54.976635 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 03:14:54.976647 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 03:14:54.976658 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 03:14:54.976670 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 03:14:54.976682 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 03:14:54.976701 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 03:14:54.976713 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 03:14:54.976725 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 03:14:54.976737 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 03:14:54.976756 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 03:14:54.976768 kernel: iommu: Default domain type: Translated
Jan 20 03:14:54.976780 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 03:14:54.976804 kernel: PCI: Using ACPI for IRQ routing
Jan 20 03:14:54.976816 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 03:14:54.976833 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 03:14:54.976844 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 20 03:14:54.977033 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 03:14:54.977187 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 03:14:54.977370 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 03:14:54.977388 kernel: vgaarb: loaded
Jan 20 03:14:54.977400 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 03:14:54.977415 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 03:14:54.977440 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 03:14:54.977458 kernel: pnp: PnP ACPI init
Jan 20 03:14:54.981023 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 03:14:54.981045 kernel: pnp: PnP ACPI: found 5 devices
Jan 20 03:14:54.981058 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 03:14:54.981076 kernel: NET: Registered PF_INET protocol family
Jan 20 03:14:54.981087 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 03:14:54.981099 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 20 03:14:54.981111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 03:14:54.981130 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 03:14:54.981148 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 20 03:14:54.981159 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 20 03:14:54.981184 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 20 03:14:54.981201 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 20 03:14:54.981213 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 03:14:54.981225 kernel: NET: Registered PF_XDP protocol family
Jan 20 03:14:54.981420 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 20 03:14:54.981621 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 20 03:14:54.981783 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 20 03:14:54.981960 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 20 03:14:54.982121 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 20 03:14:54.982288 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 20 03:14:54.982479 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 20 03:14:54.982652 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 20 03:14:54.982827 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 20 03:14:54.983005 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 20 03:14:54.983168 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 20 03:14:54.983344 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 20 03:14:54.983526 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 20 03:14:54.983701 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 20 03:14:54.983853 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 20 03:14:54.984004 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 20 03:14:54.984161 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 20 03:14:54.984343 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 20 03:14:54.984507 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 03:14:54.985705 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 20 03:14:54.985900 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 20 03:14:54.986070 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 03:14:54.986255 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 03:14:54.986431 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 20 03:14:54.986622 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 20 03:14:54.986777 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 03:14:54.986960 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 03:14:54.987112 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 20 03:14:54.987264 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 20 03:14:54.987446 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 03:14:54.989636 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 03:14:54.989810 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 20 03:14:54.989985 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 20 03:14:54.990139 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 03:14:54.990323 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 03:14:54.990499 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 20 03:14:54.990696 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 20 03:14:54.990860 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 03:14:54.991017 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 03:14:54.991169 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 20 03:14:54.991363 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 20 03:14:54.991540 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 03:14:54.993730 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 03:14:54.993898 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 20 03:14:54.994055 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 20 03:14:54.994228 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 03:14:54.994396 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 03:14:54.994612 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 20 03:14:54.994769 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 20 03:14:54.994922 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 03:14:54.995089 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 03:14:54.995226 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 03:14:54.995380 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 03:14:54.995541 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 20 03:14:54.997856 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 03:14:54.998023 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 20 03:14:54.998223 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 20 03:14:54.998401 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 20 03:14:54.998572 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 03:14:54.998749 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 20 03:14:54.998962 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 20 03:14:54.999121 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 20 03:14:54.999296 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 03:14:54.999481 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 20 03:14:55.001526 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 20 03:14:55.001704 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 03:14:55.001894 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 20 03:14:55.002051 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 20 03:14:55.002196 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 03:14:55.002367 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 20 03:14:55.002529 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 20 03:14:55.002701 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 03:14:55.002877 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 20 03:14:55.003045 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 20
03:14:55.003208 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 20 03:14:55.003388 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 20 03:14:55.003547 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 20 03:14:55.003720 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 20 03:14:55.003888 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 20 03:14:55.004033 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 20 03:14:55.004175 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 20 03:14:55.004195 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 20 03:14:55.004215 kernel: PCI: CLS 0 bytes, default 64 Jan 20 03:14:55.004229 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 20 03:14:55.004241 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 20 03:14:55.004254 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 20 03:14:55.004267 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 20 03:14:55.004280 kernel: Initialise system trusted keyrings Jan 20 03:14:55.004292 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 20 03:14:55.004305 kernel: Key type asymmetric registered Jan 20 03:14:55.004317 kernel: Asymmetric key parser 'x509' registered Jan 20 03:14:55.004334 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 20 03:14:55.004347 kernel: io scheduler mq-deadline registered Jan 20 03:14:55.004359 kernel: io scheduler kyber registered Jan 20 03:14:55.004372 kernel: io scheduler bfq registered Jan 20 03:14:55.004558 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 20 03:14:55.004737 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 20 03:14:55.004903 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.005082 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 20 03:14:55.005250 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 20 03:14:55.005411 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.005682 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 20 03:14:55.005857 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 20 03:14:55.006023 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.006187 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 20 03:14:55.006361 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 20 03:14:55.006550 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.006724 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 20 03:14:55.006890 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 20 03:14:55.007061 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.007225 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 20 03:14:55.007396 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 20 03:14:55.007605 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.007764 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 20 03:14:55.007938 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 20 03:14:55.008095 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.008257 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 20 03:14:55.008438 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 20 03:14:55.008631 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 03:14:55.008651 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 20 03:14:55.008665 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 20 03:14:55.008678 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 20 03:14:55.008690 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 03:14:55.008703 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 20 03:14:55.008722 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 20 03:14:55.008748 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 20 03:14:55.008760 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 20 03:14:55.008930 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 20 03:14:55.008950 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 20 03:14:55.009110 kernel: rtc_cmos 00:03: registered as rtc0 Jan 20 03:14:55.009265 kernel: rtc_cmos 00:03: setting system clock to 2026-01-20T03:14:54 UTC (1768878894) Jan 20 03:14:55.009429 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 20 03:14:55.009476 kernel: intel_pstate: CPU model not supported Jan 20 03:14:55.009490 kernel: NET: Registered PF_INET6 protocol family Jan 20 03:14:55.009503 kernel: Segment Routing with IPv6 Jan 20 03:14:55.009516 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 03:14:55.009528 kernel: NET: Registered PF_PACKET protocol family Jan 20 03:14:55.009541 kernel: Key type dns_resolver registered Jan 20 03:14:55.009553 kernel: IPI shorthand broadcast: enabled Jan 20 03:14:55.009565 kernel: 
sched_clock: Marking stable (3400003631, 218546381)->(3744397435, -125847423) Jan 20 03:14:55.009578 kernel: registered taskstats version 1 Jan 20 03:14:55.009616 kernel: Loading compiled-in X.509 certificates Jan 20 03:14:55.009629 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 5eaf2083485884e476a8ac33c4b07b82eff139e9' Jan 20 03:14:55.009642 kernel: Demotion targets for Node 0: null Jan 20 03:14:55.009658 kernel: Key type .fscrypt registered Jan 20 03:14:55.009670 kernel: Key type fscrypt-provisioning registered Jan 20 03:14:55.009683 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 03:14:55.009695 kernel: ima: Allocated hash algorithm: sha1 Jan 20 03:14:55.009708 kernel: ima: No architecture policies found Jan 20 03:14:55.009720 kernel: clk: Disabling unused clocks Jan 20 03:14:55.009737 kernel: Warning: unable to open an initial console. Jan 20 03:14:55.009750 kernel: Freeing unused kernel image (initmem) memory: 46204K Jan 20 03:14:55.009762 kernel: Write protecting the kernel read-only data: 40960k Jan 20 03:14:55.009775 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Jan 20 03:14:55.009787 kernel: Run /init as init process Jan 20 03:14:55.009800 kernel: with arguments: Jan 20 03:14:55.009812 kernel: /init Jan 20 03:14:55.009824 kernel: with environment: Jan 20 03:14:55.009836 kernel: HOME=/ Jan 20 03:14:55.009852 kernel: TERM=linux Jan 20 03:14:55.009874 systemd[1]: Successfully made /usr/ read-only. Jan 20 03:14:55.009891 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 03:14:55.009905 systemd[1]: Detected virtualization kvm. 
Jan 20 03:14:55.009918 systemd[1]: Detected architecture x86-64. Jan 20 03:14:55.009931 systemd[1]: Running in initrd. Jan 20 03:14:55.009944 systemd[1]: No hostname configured, using default hostname. Jan 20 03:14:55.009962 systemd[1]: Hostname set to . Jan 20 03:14:55.009976 systemd[1]: Initializing machine ID from VM UUID. Jan 20 03:14:55.009989 systemd[1]: Queued start job for default target initrd.target. Jan 20 03:14:55.010002 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:14:55.010027 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:14:55.010041 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 03:14:55.010054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 03:14:55.010079 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 03:14:55.010098 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 03:14:55.010112 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 20 03:14:55.010125 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 20 03:14:55.010138 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:14:55.010151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:14:55.010165 systemd[1]: Reached target paths.target - Path Units. Jan 20 03:14:55.010184 systemd[1]: Reached target slices.target - Slice Units. Jan 20 03:14:55.010201 systemd[1]: Reached target swap.target - Swaps. Jan 20 03:14:55.010215 systemd[1]: Reached target timers.target - Timer Units. 
Jan 20 03:14:55.010228 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 03:14:55.010241 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 03:14:55.010257 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 03:14:55.010270 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 03:14:55.010283 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:14:55.010309 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 03:14:55.010322 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:14:55.010339 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 03:14:55.010351 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 03:14:55.010364 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 03:14:55.010377 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 03:14:55.010403 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 03:14:55.010417 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 03:14:55.010430 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 03:14:55.010443 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 03:14:55.010460 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:14:55.010484 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 03:14:55.010498 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:14:55.010511 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 03:14:55.010572 systemd-journald[210]: Collecting audit messages is disabled. 
Jan 20 03:14:55.010633 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 03:14:55.010648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 03:14:55.010662 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 03:14:55.010675 kernel: Bridge firewalling registered Jan 20 03:14:55.010693 systemd-journald[210]: Journal started Jan 20 03:14:55.010723 systemd-journald[210]: Runtime Journal (/run/log/journal/7dded5f0822c447e8682c0d15a8d9382) is 4.7M, max 37.8M, 33.1M free. Jan 20 03:14:54.939259 systemd-modules-load[212]: Inserted module 'overlay' Jan 20 03:14:55.034398 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 03:14:55.000768 systemd-modules-load[212]: Inserted module 'br_netfilter' Jan 20 03:14:55.035492 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 03:14:55.036880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:14:55.040824 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 03:14:55.042754 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:14:55.045442 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 03:14:55.049719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 03:14:55.071631 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:14:55.073507 systemd-tmpfiles[228]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 03:14:55.080208 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 20 03:14:55.082057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:14:55.088684 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 03:14:55.097416 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 03:14:55.103786 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 03:14:55.133188 dracut-cmdline[251]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f1266f495940b87d8762edac6a2036329f4c1218cb3943862a5de7e7a0c377ea Jan 20 03:14:55.148725 systemd-resolved[247]: Positive Trust Anchors: Jan 20 03:14:55.149678 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 03:14:55.149720 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 03:14:55.157755 systemd-resolved[247]: Defaulting to hostname 'linux'. Jan 20 03:14:55.160785 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 03:14:55.161875 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 20 03:14:55.249612 kernel: SCSI subsystem initialized Jan 20 03:14:55.261606 kernel: Loading iSCSI transport class v2.0-870. Jan 20 03:14:55.274633 kernel: iscsi: registered transport (tcp) Jan 20 03:14:55.300633 kernel: iscsi: registered transport (qla4xxx) Jan 20 03:14:55.300675 kernel: QLogic iSCSI HBA Driver Jan 20 03:14:55.327605 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 03:14:55.354325 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:14:55.357363 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 03:14:55.418541 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 03:14:55.421303 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 03:14:55.484621 kernel: raid6: sse2x4 gen() 12552 MB/s Jan 20 03:14:55.502624 kernel: raid6: sse2x2 gen() 8573 MB/s Jan 20 03:14:55.521267 kernel: raid6: sse2x1 gen() 8554 MB/s Jan 20 03:14:55.521304 kernel: raid6: using algorithm sse2x4 gen() 12552 MB/s Jan 20 03:14:55.540217 kernel: raid6: .... xor() 7910 MB/s, rmw enabled Jan 20 03:14:55.540273 kernel: raid6: using ssse3x2 recovery algorithm Jan 20 03:14:55.565616 kernel: xor: automatically using best checksumming function avx Jan 20 03:14:55.755628 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 03:14:55.764631 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 03:14:55.768768 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 03:14:55.799297 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jan 20 03:14:55.808352 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:14:55.810251 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 20 03:14:55.836614 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Jan 20 03:14:55.868036 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 03:14:55.870642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 03:14:55.987739 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 03:14:55.992856 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 03:14:56.098609 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 20 03:14:56.115612 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 20 03:14:56.124634 kernel: cryptd: max_cpu_qlen set to 1000 Jan 20 03:14:56.151665 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 03:14:56.151708 kernel: GPT:17805311 != 125829119 Jan 20 03:14:56.151733 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 03:14:56.151757 kernel: GPT:17805311 != 125829119 Jan 20 03:14:56.151773 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 03:14:56.151789 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:14:56.167621 kernel: libata version 3.00 loaded. Jan 20 03:14:56.178514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 03:14:56.187765 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jan 20 03:14:56.187805 kernel: AES CTR mode by8 optimization enabled Jan 20 03:14:56.178724 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:14:56.184242 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:14:56.192665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:14:56.195160 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jan 20 03:14:56.197600 kernel: ACPI: bus type USB registered Jan 20 03:14:56.199894 kernel: usbcore: registered new interface driver usbfs Jan 20 03:14:56.202838 kernel: usbcore: registered new interface driver hub Jan 20 03:14:56.202872 kernel: usbcore: registered new device driver usb Jan 20 03:14:56.206609 kernel: ahci 0000:00:1f.2: version 3.0 Jan 20 03:14:56.206858 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 20 03:14:56.216616 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 20 03:14:56.216844 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 20 03:14:56.217079 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 20 03:14:56.256607 kernel: scsi host0: ahci Jan 20 03:14:56.260951 kernel: scsi host1: ahci Jan 20 03:14:56.261615 kernel: scsi host2: ahci Jan 20 03:14:56.263618 kernel: scsi host3: ahci Jan 20 03:14:56.264602 kernel: scsi host4: ahci Jan 20 03:14:56.267379 kernel: scsi host5: ahci Jan 20 03:14:56.267611 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 lpm-pol 1 Jan 20 03:14:56.267634 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 lpm-pol 1 Jan 20 03:14:56.267650 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 lpm-pol 1 Jan 20 03:14:56.267673 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 lpm-pol 1 Jan 20 03:14:56.267690 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 lpm-pol 1 Jan 20 03:14:56.267705 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 lpm-pol 1 Jan 20 03:14:56.301811 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 20 03:14:56.369554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 20 03:14:56.372922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:14:56.393538 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 03:14:56.413143 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 03:14:56.425141 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 03:14:56.427124 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 03:14:56.456759 disk-uuid[606]: Primary Header is updated. Jan 20 03:14:56.456759 disk-uuid[606]: Secondary Entries is updated. Jan 20 03:14:56.456759 disk-uuid[606]: Secondary Header is updated. Jan 20 03:14:56.463616 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:14:56.471619 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:14:56.578626 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 03:14:56.578712 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 03:14:56.580837 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 20 03:14:56.582613 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 03:14:56.584617 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 03:14:56.584648 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 03:14:56.610617 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 20 03:14:56.621604 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 20 03:14:56.628629 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 20 03:14:56.633613 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 20 03:14:56.638607 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 20 03:14:56.642603 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 20 03:14:56.642837 kernel: hub 1-0:1.0: USB hub found Jan 20 03:14:56.643079 
kernel: hub 1-0:1.0: 4 ports detected Jan 20 03:14:56.647044 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 20 03:14:56.647294 kernel: hub 2-0:1.0: USB hub found Jan 20 03:14:56.647508 kernel: hub 2-0:1.0: 4 ports detected Jan 20 03:14:56.676829 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 03:14:56.689047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 03:14:56.690795 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:14:56.692374 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 03:14:56.695507 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 03:14:56.721021 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 03:14:56.884716 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 20 03:14:57.026731 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 03:14:57.032615 kernel: usbcore: registered new interface driver usbhid Jan 20 03:14:57.032653 kernel: usbhid: USB HID core driver Jan 20 03:14:57.041186 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Jan 20 03:14:57.041225 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 20 03:14:57.475624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 03:14:57.477906 disk-uuid[608]: The operation has completed successfully. Jan 20 03:14:57.531548 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 03:14:57.531747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 03:14:57.579094 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 20 03:14:57.594935 sh[635]: Success Jan 20 03:14:57.619986 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 03:14:57.620031 kernel: device-mapper: uevent: version 1.0.3 Jan 20 03:14:57.622630 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 03:14:57.634617 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Jan 20 03:14:57.682951 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 03:14:57.689695 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 03:14:57.704495 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 20 03:14:57.718653 kernel: BTRFS: device fsid 1cad4abe-82cb-4052-9906-9dfb1f3e3340 devid 1 transid 44 /dev/mapper/usr (253:0) scanned by mount (647) Jan 20 03:14:57.723088 kernel: BTRFS info (device dm-0): first mount of filesystem 1cad4abe-82cb-4052-9906-9dfb1f3e3340 Jan 20 03:14:57.723122 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:14:57.736290 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 03:14:57.736329 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 03:14:57.740518 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 03:14:57.742027 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 03:14:57.742950 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 03:14:57.744111 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 03:14:57.748773 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 20 03:14:57.781639 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (680) Jan 20 03:14:57.784752 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:14:57.787601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:14:57.792962 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:14:57.793004 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:14:57.799641 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:14:57.800440 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 03:14:57.804768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 03:14:57.910106 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 03:14:57.913134 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 03:14:57.973253 systemd-networkd[817]: lo: Link UP Jan 20 03:14:57.974971 systemd-networkd[817]: lo: Gained carrier Jan 20 03:14:57.978242 systemd-networkd[817]: Enumeration completed Jan 20 03:14:57.979043 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 03:14:57.980246 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:14:57.980251 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 03:14:57.981911 systemd[1]: Reached target network.target - Network. Jan 20 03:14:57.985171 systemd-networkd[817]: eth0: Link UP Jan 20 03:14:57.985526 systemd-networkd[817]: eth0: Gained carrier Jan 20 03:14:57.985549 systemd-networkd[817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 20 03:14:57.999679 systemd-networkd[817]: eth0: DHCPv4 address 10.230.49.118/30, gateway 10.230.49.117 acquired from 10.230.49.117 Jan 20 03:14:58.018838 ignition[737]: Ignition 2.22.0 Jan 20 03:14:58.018871 ignition[737]: Stage: fetch-offline Jan 20 03:14:58.018958 ignition[737]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:14:58.018976 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 03:14:58.022243 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 03:14:58.019182 ignition[737]: parsed url from cmdline: "" Jan 20 03:14:58.024804 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 03:14:58.019189 ignition[737]: no config URL provided Jan 20 03:14:58.019214 ignition[737]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 03:14:58.019229 ignition[737]: no config at "/usr/lib/ignition/user.ign" Jan 20 03:14:58.019245 ignition[737]: failed to fetch config: resource requires networking Jan 20 03:14:58.019746 ignition[737]: Ignition finished successfully Jan 20 03:14:58.060935 ignition[826]: Ignition 2.22.0 Jan 20 03:14:58.060974 ignition[826]: Stage: fetch Jan 20 03:14:58.061182 ignition[826]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:14:58.061199 ignition[826]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 03:14:58.061345 ignition[826]: parsed url from cmdline: "" Jan 20 03:14:58.061363 ignition[826]: no config URL provided Jan 20 03:14:58.061372 ignition[826]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 03:14:58.061395 ignition[826]: no config at "/usr/lib/ignition/user.ign" Jan 20 03:14:58.061564 ignition[826]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 20 03:14:58.062159 ignition[826]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 20 03:14:58.062225 ignition[826]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jan 20 03:14:58.081972 ignition[826]: GET result: OK Jan 20 03:14:58.082821 ignition[826]: parsing config with SHA512: 54012ba507e4fa2e86d93f53edf84eb7ec7a225be959f22f3db604a40a4059ef1e5834d3d2ab57360ee57d44a466ec21a4b0a72e313650de9ebf61dd623d6613 Jan 20 03:14:58.087964 unknown[826]: fetched base config from "system" Jan 20 03:14:58.087980 unknown[826]: fetched base config from "system" Jan 20 03:14:58.088358 ignition[826]: fetch: fetch complete Jan 20 03:14:58.087988 unknown[826]: fetched user config from "openstack" Jan 20 03:14:58.088365 ignition[826]: fetch: fetch passed Jan 20 03:14:58.091185 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 03:14:58.088445 ignition[826]: Ignition finished successfully Jan 20 03:14:58.093775 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 03:14:58.136847 ignition[832]: Ignition 2.22.0 Jan 20 03:14:58.137959 ignition[832]: Stage: kargs Jan 20 03:14:58.138172 ignition[832]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:14:58.138189 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 03:14:58.141222 ignition[832]: kargs: kargs passed Jan 20 03:14:58.141297 ignition[832]: Ignition finished successfully Jan 20 03:14:58.143512 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 03:14:58.145776 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 03:14:58.186122 ignition[838]: Ignition 2.22.0 Jan 20 03:14:58.186155 ignition[838]: Stage: disks Jan 20 03:14:58.186334 ignition[838]: no configs at "/usr/lib/ignition/base.d" Jan 20 03:14:58.186350 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 03:14:58.188440 ignition[838]: disks: disks passed Jan 20 03:14:58.190843 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
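The "parsing config with SHA512: …" entry above is Ignition fingerprinting the user_data it fetched from the metadata service, so a given config can be matched to a boot. A hedged sketch of how such a digest is produced (the helper name is invented; Ignition itself is written in Go):

```python
import hashlib


def config_fingerprint(user_data: bytes) -> str:
    # SHA-512 over the raw config bytes: identical configs always
    # yield the identical 128-hex-character digest seen in the log.
    return hashlib.sha512(user_data).hexdigest()


print(config_fingerprint(b'{"ignition": {"version": "3.4.0"}}'))
```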
Jan 20 03:14:58.188564 ignition[838]: Ignition finished successfully Jan 20 03:14:58.194285 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 03:14:58.195072 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 03:14:58.196789 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 03:14:58.198548 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 03:14:58.200032 systemd[1]: Reached target basic.target - Basic System. Jan 20 03:14:58.202945 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 03:14:58.245248 systemd-fsck[846]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 20 03:14:58.249415 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 03:14:58.254136 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 03:14:58.388614 kernel: EXT4-fs (vda9): mounted filesystem d87587c2-84ee-4a64-a55e-c6773c94f548 r/w with ordered data mode. Quota mode: none. Jan 20 03:14:58.390104 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 03:14:58.392346 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 03:14:58.395866 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 03:14:58.398685 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 03:14:58.399841 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 03:14:58.402760 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 20 03:14:58.403680 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 03:14:58.403725 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 20 03:14:58.419910 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 03:14:58.424188 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 03:14:58.432637 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (854) Jan 20 03:14:58.441454 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:14:58.441513 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:14:58.448610 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:14:58.448666 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:14:58.455573 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 03:14:58.492627 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:14:58.519285 initrd-setup-root[882]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 03:14:58.526185 initrd-setup-root[889]: cut: /sysroot/etc/group: No such file or directory Jan 20 03:14:58.531834 initrd-setup-root[896]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 03:14:58.538511 initrd-setup-root[903]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 03:14:58.644961 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 03:14:58.648634 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 03:14:58.650168 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 03:14:58.674607 kernel: BTRFS info (device vda6): last unmount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:14:58.694659 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 20 03:14:58.713657 ignition[972]: INFO : Ignition 2.22.0 Jan 20 03:14:58.715681 ignition[972]: INFO : Stage: mount Jan 20 03:14:58.715681 ignition[972]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:14:58.715681 ignition[972]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 03:14:58.719230 ignition[972]: INFO : mount: mount passed Jan 20 03:14:58.719230 ignition[972]: INFO : Ignition finished successfully Jan 20 03:14:58.715802 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 03:14:58.719478 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 03:14:59.090919 systemd-networkd[817]: eth0: Gained IPv6LL Jan 20 03:14:59.525613 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:00.101802 systemd-networkd[817]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c5d:24:19ff:fee6:3176/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c5d:24:19ff:fee6:3176/64 assigned by NDisc. Jan 20 03:15:00.101815 systemd-networkd[817]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 20 03:15:01.534611 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:05.540632 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:05.550007 coreos-metadata[856]: Jan 20 03:15:05.549 WARN failed to locate config-drive, using the metadata service API instead Jan 20 03:15:05.572217 coreos-metadata[856]: Jan 20 03:15:05.572 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 03:15:05.584349 coreos-metadata[856]: Jan 20 03:15:05.584 INFO Fetch successful Jan 20 03:15:05.585423 coreos-metadata[856]: Jan 20 03:15:05.585 INFO wrote hostname srv-jqch3.gb1.brightbox.com to /sysroot/etc/hostname Jan 20 03:15:05.587633 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
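The networkd hint above suggests the IPv6Token= setting to stop the NDisc-generated address from clashing with the DHCPv6 one. A hypothetical drop-in along those lines (file name and token value invented; consult systemd.network(5) for the exact syntax on this systemd version):

```ini
; /etc/systemd/network/50-eth0.network (hypothetical example)
[Match]
Name=eth0

[Network]
DHCP=yes
; Pin the interface-identifier half of SLAAC addresses so the
; NDisc-derived address is predictable and non-conflicting.
IPv6Token=::1a
```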
Jan 20 03:15:05.587792 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 20 03:15:05.591830 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 03:15:05.619511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 03:15:05.656606 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (987) Jan 20 03:15:05.656657 kernel: BTRFS info (device vda6): first mount of filesystem 084bfd60-dd5e-4810-8f7b-6e24dbaec2b2 Jan 20 03:15:05.659607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 03:15:05.664370 kernel: BTRFS info (device vda6): turning on async discard Jan 20 03:15:05.664402 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 03:15:05.667347 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 03:15:05.705763 ignition[1004]: INFO : Ignition 2.22.0 Jan 20 03:15:05.705763 ignition[1004]: INFO : Stage: files Jan 20 03:15:05.707740 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:15:05.707740 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 03:15:05.707740 ignition[1004]: DEBUG : files: compiled without relabeling support, skipping Jan 20 03:15:05.712800 ignition[1004]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 03:15:05.712800 ignition[1004]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 03:15:05.724157 ignition[1004]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 03:15:05.724157 ignition[1004]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 03:15:05.724157 ignition[1004]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 03:15:05.724120 unknown[1004]: wrote ssh authorized keys file for user: core Jan 20 03:15:05.728773 ignition[1004]: 
INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 03:15:05.728773 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 20 03:15:05.923943 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 03:15:06.156895 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 03:15:06.158495 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 03:15:06.167257 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 03:15:06.167257 ignition[1004]: INFO 
: files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 03:15:06.167257 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:15:06.167257 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:15:06.167257 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:15:06.167257 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 20 03:15:06.469784 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 03:15:07.584936 ignition[1004]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 20 03:15:07.584936 ignition[1004]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 03:15:07.588549 ignition[1004]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 03:15:07.588549 ignition[1004]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 03:15:07.588549 ignition[1004]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 03:15:07.588549 ignition[1004]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 03:15:07.594008 ignition[1004]: INFO : files: op(d): [finished] setting preset to 
enabled for "prepare-helm.service" Jan 20 03:15:07.594008 ignition[1004]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 03:15:07.594008 ignition[1004]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 03:15:07.594008 ignition[1004]: INFO : files: files passed Jan 20 03:15:07.594008 ignition[1004]: INFO : Ignition finished successfully Jan 20 03:15:07.593703 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 03:15:07.597776 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 03:15:07.601760 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 03:15:07.612465 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 03:15:07.612996 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 03:15:07.622517 initrd-setup-root-after-ignition[1034]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:15:07.624262 initrd-setup-root-after-ignition[1038]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:15:07.625505 initrd-setup-root-after-ignition[1034]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 03:15:07.625938 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 03:15:07.627468 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 03:15:07.629774 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 03:15:07.681555 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 03:15:07.681769 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
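The files-stage operations above (fetching the helm tarball, writing the home-directory manifests and update.conf, linking the kubernetes sysext, enabling prepare-helm.service) are the kind of output a provisioning config produces. A hypothetical Butane-style sketch that would yield similar operations — the paths mirror the log, but the actual config used on this host is not recoverable from it:

```yaml
# Hypothetical Butane config (transpiled to Ignition JSON before boot);
# reconstructed from the log's file list, not the real config.
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
    - path: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
      contents:
        source: https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw
  links:
    - path: /etc/extensions/kubernetes.raw
      target: /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
```

Each storage entry corresponds to one numbered op(…) in the log, and the unit stanza matches the "setting preset to enabled" lines for prepare-helm.service.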
Jan 20 03:15:07.683423 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 03:15:07.684757 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 03:15:07.686303 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 03:15:07.687336 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 03:15:07.729574 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 03:15:07.732337 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 03:15:07.761737 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:15:07.763513 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 03:15:07.765353 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 03:15:07.766847 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 03:15:07.767068 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 03:15:07.768674 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 03:15:07.769738 systemd[1]: Stopped target basic.target - Basic System. Jan 20 03:15:07.771294 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 03:15:07.772730 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 03:15:07.774316 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 03:15:07.775944 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 03:15:07.777410 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 03:15:07.778839 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 03:15:07.780575 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 20 03:15:07.781897 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 03:15:07.783444 systemd[1]: Stopped target swap.target - Swaps. Jan 20 03:15:07.784733 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 03:15:07.784898 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 03:15:07.786719 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:15:07.787699 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:15:07.788948 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 03:15:07.789374 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:15:07.790533 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 03:15:07.790791 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 03:15:07.792553 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 03:15:07.792746 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 03:15:07.794367 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 03:15:07.794599 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 03:15:07.797677 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 03:15:07.803290 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 03:15:07.803467 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:15:07.811413 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 03:15:07.813689 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 03:15:07.813946 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 03:15:07.815397 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 20 03:15:07.815559 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 03:15:07.826795 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 03:15:07.826913 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 03:15:07.848402 ignition[1058]: INFO : Ignition 2.22.0 Jan 20 03:15:07.850713 ignition[1058]: INFO : Stage: umount Jan 20 03:15:07.850713 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 03:15:07.850713 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 03:15:07.850507 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 03:15:07.855886 ignition[1058]: INFO : umount: umount passed Jan 20 03:15:07.855886 ignition[1058]: INFO : Ignition finished successfully Jan 20 03:15:07.855423 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 03:15:07.855619 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 03:15:07.856989 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 03:15:07.857095 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 03:15:07.857876 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 03:15:07.857938 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 03:15:07.859174 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 03:15:07.859236 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 03:15:07.860494 systemd[1]: Stopped target network.target - Network. Jan 20 03:15:07.861687 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 03:15:07.861747 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 03:15:07.863031 systemd[1]: Stopped target paths.target - Path Units. Jan 20 03:15:07.864237 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 20 03:15:07.866781 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:15:07.867902 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 03:15:07.869166 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 03:15:07.870635 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 03:15:07.870701 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 03:15:07.872014 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 03:15:07.872089 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 03:15:07.873262 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 03:15:07.873334 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 03:15:07.874713 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 03:15:07.874770 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 03:15:07.876401 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 03:15:07.878349 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 03:15:07.880710 systemd-networkd[817]: eth0: DHCPv6 lease lost Jan 20 03:15:07.887483 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 03:15:07.887700 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 03:15:07.890651 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 20 03:15:07.891016 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 03:15:07.891226 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 03:15:07.894880 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 20 03:15:07.895603 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 20 03:15:07.897270 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 03:15:07.897340 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:15:07.901721 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 03:15:07.902746 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 03:15:07.902809 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 03:15:07.904452 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 03:15:07.904532 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:15:07.907200 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 03:15:07.907263 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 03:15:07.909090 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 03:15:07.909179 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:15:07.911357 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 03:15:07.913551 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 20 03:15:07.915666 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:15:07.925412 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 03:15:07.927103 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:15:07.928416 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 03:15:07.928490 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 03:15:07.930040 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 03:15:07.930090 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 20 03:15:07.934153 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 03:15:07.934226 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 03:15:07.936405 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 03:15:07.936487 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 03:15:07.937839 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 03:15:07.937918 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 03:15:07.940275 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 03:15:07.942518 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 03:15:07.942619 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:15:07.945635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 03:15:07.945699 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 03:15:07.948434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 03:15:07.948512 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:15:07.951248 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 20 03:15:07.951323 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 20 03:15:07.951388 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 20 03:15:07.951894 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 03:15:07.952025 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 03:15:07.962102 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jan 20 03:15:07.962261 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 03:15:07.981893 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 03:15:07.982073 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 03:15:07.984027 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 03:15:07.985077 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 03:15:07.985190 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 03:15:07.988640 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 03:15:08.004738 systemd[1]: Switching root. Jan 20 03:15:08.044880 systemd-journald[210]: Journal stopped Jan 20 03:15:09.516438 systemd-journald[210]: Received SIGTERM from PID 1 (systemd). Jan 20 03:15:09.516539 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 03:15:09.516596 kernel: SELinux: policy capability open_perms=1 Jan 20 03:15:09.516630 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 03:15:09.516656 kernel: SELinux: policy capability always_check_network=0 Jan 20 03:15:09.516679 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 03:15:09.516710 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 03:15:09.516734 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 03:15:09.516752 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 03:15:09.516769 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 03:15:09.516788 kernel: audit: type=1403 audit(1768878908.333:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 03:15:09.516807 systemd[1]: Successfully loaded SELinux policy in 72.884ms. Jan 20 03:15:09.516853 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.135ms. 
Jan 20 03:15:09.516878 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 03:15:09.516909 systemd[1]: Detected virtualization kvm. Jan 20 03:15:09.516929 systemd[1]: Detected architecture x86-64. Jan 20 03:15:09.516947 systemd[1]: Detected first boot. Jan 20 03:15:09.516974 systemd[1]: Hostname set to . Jan 20 03:15:09.517004 systemd[1]: Initializing machine ID from VM UUID. Jan 20 03:15:09.517022 zram_generator::config[1102]: No configuration found. Jan 20 03:15:09.517051 kernel: Guest personality initialized and is inactive Jan 20 03:15:09.517070 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 20 03:15:09.517106 kernel: Initialized host personality Jan 20 03:15:09.517127 kernel: NET: Registered PF_VSOCK protocol family Jan 20 03:15:09.517158 systemd[1]: Populated /etc with preset unit settings. Jan 20 03:15:09.517180 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 20 03:15:09.517199 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 03:15:09.517225 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 03:15:09.517244 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 03:15:09.517263 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 03:15:09.517283 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 03:15:09.517312 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 03:15:09.517332 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 20 03:15:09.517356 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 03:15:09.517376 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 03:15:09.517396 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 03:15:09.517415 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 03:15:09.517450 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 03:15:09.517481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 03:15:09.517502 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 03:15:09.517522 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 03:15:09.517541 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 03:15:09.517570 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 03:15:09.519648 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 03:15:09.519678 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 03:15:09.519698 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 03:15:09.519716 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 03:15:09.519735 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 03:15:09.519753 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 03:15:09.519772 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 03:15:09.519790 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 20 03:15:09.519825 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 03:15:09.519846 systemd[1]: Reached target slices.target - Slice Units. Jan 20 03:15:09.519865 systemd[1]: Reached target swap.target - Swaps. Jan 20 03:15:09.519883 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 03:15:09.519902 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 03:15:09.519921 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 03:15:09.519940 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 03:15:09.519959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 03:15:09.519978 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 03:15:09.519996 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 03:15:09.520027 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 03:15:09.520047 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 03:15:09.520065 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 03:15:09.520085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:15:09.520115 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 03:15:09.520136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 03:15:09.520155 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 03:15:09.520174 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 03:15:09.520205 systemd[1]: Reached target machines.target - Containers. 
Jan 20 03:15:09.520225 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 03:15:09.520244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 03:15:09.520263 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 03:15:09.520295 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 03:15:09.520315 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 03:15:09.520339 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 03:15:09.520360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 03:15:09.520383 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 03:15:09.521630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 03:15:09.521658 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 03:15:09.521679 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 03:15:09.521697 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 03:15:09.521726 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 03:15:09.521746 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 03:15:09.521767 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 03:15:09.521786 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 03:15:09.521819 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 20 03:15:09.521839 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 03:15:09.521858 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 03:15:09.521888 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 03:15:09.521918 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 03:15:09.521939 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 03:15:09.521958 systemd[1]: Stopped verity-setup.service. Jan 20 03:15:09.521977 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:15:09.522007 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 03:15:09.522038 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 03:15:09.522059 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 03:15:09.522079 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 03:15:09.522107 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 03:15:09.522130 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 03:15:09.522149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 03:15:09.522167 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 03:15:09.522186 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 03:15:09.522205 kernel: loop: module loaded Jan 20 03:15:09.522236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 03:15:09.522257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 03:15:09.522276 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 20 03:15:09.522295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 03:15:09.522313 kernel: fuse: init (API version 7.41) Jan 20 03:15:09.522332 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 03:15:09.522351 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 03:15:09.522370 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 03:15:09.522400 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 03:15:09.522429 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 03:15:09.522462 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 03:15:09.522481 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 03:15:09.522499 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 03:15:09.522518 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 03:15:09.522547 kernel: ACPI: bus type drm_connector registered Jan 20 03:15:09.522568 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 03:15:09.524775 systemd-journald[1196]: Collecting audit messages is disabled. Jan 20 03:15:09.524850 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 03:15:09.524875 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 03:15:09.524895 systemd-journald[1196]: Journal started Jan 20 03:15:09.524934 systemd-journald[1196]: Runtime Journal (/run/log/journal/7dded5f0822c447e8682c0d15a8d9382) is 4.7M, max 37.8M, 33.1M free. Jan 20 03:15:09.531643 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 03:15:09.107488 systemd[1]: Queued start job for default target multi-user.target. 
Jan 20 03:15:09.121723 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 03:15:09.122457 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 03:15:09.537646 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 03:15:09.541626 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 03:15:09.548600 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 03:15:09.548646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 03:15:09.555621 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 03:15:09.561631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 03:15:09.566633 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 03:15:09.569628 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 03:15:09.578647 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 03:15:09.590637 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 03:15:09.590686 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 03:15:09.595613 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 03:15:09.598319 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 03:15:09.601792 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 03:15:09.605077 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 03:15:09.607054 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 20 03:15:09.608219 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 03:15:09.637438 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 03:15:09.644957 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 03:15:09.652130 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 03:15:09.663704 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 03:15:09.671018 kernel: loop0: detected capacity change from 0 to 110984 Jan 20 03:15:09.715294 systemd-journald[1196]: Time spent on flushing to /var/log/journal/7dded5f0822c447e8682c0d15a8d9382 is 110.338ms for 1168 entries. Jan 20 03:15:09.715294 systemd-journald[1196]: System Journal (/var/log/journal/7dded5f0822c447e8682c0d15a8d9382) is 8M, max 584.8M, 576.8M free. Jan 20 03:15:09.880341 systemd-journald[1196]: Received client request to flush runtime journal. Jan 20 03:15:09.880420 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 03:15:09.880458 kernel: loop1: detected capacity change from 0 to 128560 Jan 20 03:15:09.880506 kernel: loop2: detected capacity change from 0 to 8 Jan 20 03:15:09.741387 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 03:15:09.758415 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 03:15:09.773868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 03:15:09.849702 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 20 03:15:09.849723 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Jan 20 03:15:09.856176 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 03:15:09.885502 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jan 20 03:15:09.899944 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 03:15:09.903605 kernel: loop3: detected capacity change from 0 to 229808 Jan 20 03:15:09.955627 kernel: loop4: detected capacity change from 0 to 110984 Jan 20 03:15:09.982507 kernel: loop5: detected capacity change from 0 to 128560 Jan 20 03:15:10.005636 kernel: loop6: detected capacity change from 0 to 8 Jan 20 03:15:10.012846 kernel: loop7: detected capacity change from 0 to 229808 Jan 20 03:15:10.039043 (sd-merge)[1266]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 20 03:15:10.040267 (sd-merge)[1266]: Merged extensions into '/usr'. Jan 20 03:15:10.048342 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 03:15:10.048369 systemd[1]: Reloading... Jan 20 03:15:10.245729 zram_generator::config[1294]: No configuration found. Jan 20 03:15:10.347167 ldconfig[1217]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 03:15:10.574145 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 03:15:10.574764 systemd[1]: Reloading finished in 523 ms. Jan 20 03:15:10.599973 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 03:15:10.601238 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 03:15:10.602363 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 03:15:10.615737 systemd[1]: Starting ensure-sysext.service... Jan 20 03:15:10.619745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 03:15:10.622305 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 03:15:10.637649 systemd[1]: Reload requested from client PID 1350 ('systemctl') (unit ensure-sysext.service)... 
Jan 20 03:15:10.637676 systemd[1]: Reloading... Jan 20 03:15:10.685750 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Jan 20 03:15:10.691868 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 03:15:10.691924 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 03:15:10.692405 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 03:15:10.692902 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 03:15:10.697449 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 03:15:10.698919 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 20 03:15:10.699032 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 20 03:15:10.711057 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 03:15:10.711084 systemd-tmpfiles[1351]: Skipping /boot Jan 20 03:15:10.757464 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 03:15:10.757483 systemd-tmpfiles[1351]: Skipping /boot Jan 20 03:15:10.779641 zram_generator::config[1386]: No configuration found. Jan 20 03:15:11.074637 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 03:15:11.153626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 20 03:15:11.172629 kernel: ACPI: button: Power Button [PWRF] Jan 20 03:15:11.222611 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 03:15:11.229631 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 03:15:11.248314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 20 03:15:11.250919 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 03:15:11.251429 systemd[1]: Reloading finished in 613 ms. Jan 20 03:15:11.269469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 03:15:11.288424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 03:15:11.365046 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:15:11.367893 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 03:15:11.371787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 03:15:11.373868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 03:15:11.375967 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 03:15:11.380964 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 03:15:11.386987 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 03:15:11.388190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 03:15:11.391098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 03:15:11.392868 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 03:15:11.396163 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 03:15:11.411010 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 20 03:15:11.419791 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 03:15:11.429028 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 03:15:11.430200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:15:11.446226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:15:11.446597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 03:15:11.455796 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 03:15:11.456783 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 03:15:11.456849 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 03:15:11.456961 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 03:15:11.457672 systemd[1]: Finished ensure-sysext.service. Jan 20 03:15:11.471237 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 03:15:11.472945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 03:15:11.474781 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 03:15:11.486970 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 03:15:11.496733 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 20 03:15:11.499639 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 03:15:11.499959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 03:15:11.503455 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 03:15:11.510169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 03:15:11.510497 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 03:15:11.511887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 03:15:11.516714 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 03:15:11.534161 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 03:15:11.534498 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 03:15:11.536167 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 03:15:11.539407 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 03:15:11.554390 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 03:15:11.557880 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 03:15:11.574219 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 03:15:11.603854 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 03:15:11.617292 augenrules[1523]: No rules Jan 20 03:15:11.627994 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 03:15:11.628350 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 20 03:15:11.720682 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 03:15:11.879727 systemd-networkd[1484]: lo: Link UP Jan 20 03:15:11.879740 systemd-networkd[1484]: lo: Gained carrier Jan 20 03:15:11.882007 systemd-networkd[1484]: Enumeration completed Jan 20 03:15:11.882615 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:15:11.882628 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 03:15:11.884057 systemd-networkd[1484]: eth0: Link UP Jan 20 03:15:11.884296 systemd-networkd[1484]: eth0: Gained carrier Jan 20 03:15:11.884323 systemd-networkd[1484]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 03:15:11.899643 systemd-networkd[1484]: eth0: DHCPv4 address 10.230.49.118/30, gateway 10.230.49.117 acquired from 10.230.49.117 Jan 20 03:15:11.914340 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 03:15:11.915807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 03:15:11.916809 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 03:15:11.918445 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 03:15:11.923070 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 03:15:11.925424 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 03:15:11.926552 systemd-resolved[1486]: Positive Trust Anchors: Jan 20 03:15:11.926569 systemd-resolved[1486]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 03:15:11.927689 systemd-resolved[1486]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 03:15:11.940371 systemd-resolved[1486]: Using system hostname 'srv-jqch3.gb1.brightbox.com'. Jan 20 03:15:11.946004 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 03:15:11.947885 systemd[1]: Reached target network.target - Network. Jan 20 03:15:11.948532 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 03:15:11.949693 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 03:15:11.950689 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 03:15:11.951580 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 03:15:11.952521 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 20 03:15:11.953616 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 03:15:11.954518 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 03:15:11.955304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 03:15:11.956066 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jan 20 03:15:11.956115 systemd[1]: Reached target paths.target - Path Units. Jan 20 03:15:11.956739 systemd[1]: Reached target timers.target - Timer Units. Jan 20 03:15:11.958237 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 03:15:11.961052 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 03:15:11.965145 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 03:15:11.966112 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 03:15:11.966864 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 03:15:11.969724 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 03:15:11.970763 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 03:15:11.972515 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 03:15:11.973485 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 03:15:11.975758 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 03:15:11.976416 systemd[1]: Reached target basic.target - Basic System. Jan 20 03:15:11.977191 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 03:15:11.977255 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 03:15:11.980707 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 03:15:11.983850 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 03:15:11.991574 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 03:15:11.995862 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 20 03:15:11.998542 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 03:15:12.003026 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:12.004237 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 03:15:12.004934 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 03:15:12.013842 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 20 03:15:12.018363 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 03:15:12.027008 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 03:15:12.035954 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 03:15:12.039090 oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jan 20 03:15:12.046183 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing passwd entry cache Jan 20 03:15:12.040404 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 03:15:12.048184 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 03:15:12.051926 jq[1552]: false Jan 20 03:15:12.051547 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 03:15:12.057151 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 03:15:12.058689 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 03:15:12.062874 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 03:15:12.074551 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 20 03:15:12.076182 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 03:15:12.077702 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 03:15:12.091286 extend-filesystems[1553]: Found /dev/vda6 Jan 20 03:15:12.107188 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 03:15:12.113453 extend-filesystems[1553]: Found /dev/vda9 Jan 20 03:15:12.107509 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 03:15:12.118075 update_engine[1566]: I20260120 03:15:12.114854 1566 main.cc:92] Flatcar Update Engine starting Jan 20 03:15:12.121482 extend-filesystems[1553]: Checking size of /dev/vda9 Jan 20 03:15:12.124925 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 03:15:12.126341 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 03:15:12.130422 oslogin_cache_refresh[1555]: Failure getting users, quitting Jan 20 03:15:12.135663 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 20 03:15:12.137147 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting users, quitting Jan 20 03:15:12.137147 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 20 03:15:12.137147 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Refreshing group entry cache Jan 20 03:15:12.137147 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Failure getting groups, quitting Jan 20 03:15:12.137147 google_oslogin_nss_cache[1555]: oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 03:15:12.137335 jq[1567]: true Jan 20 03:15:12.130458 oslogin_cache_refresh[1555]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Jan 20 03:15:12.137713 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 20 03:15:12.130537 oslogin_cache_refresh[1555]: Refreshing group entry cache Jan 20 03:15:12.133751 oslogin_cache_refresh[1555]: Failure getting groups, quitting Jan 20 03:15:12.133765 oslogin_cache_refresh[1555]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 20 03:15:12.154829 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 03:15:12.170461 extend-filesystems[1553]: Resized partition /dev/vda9 Jan 20 03:15:12.181546 extend-filesystems[1595]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 03:15:12.193066 jq[1591]: true Jan 20 03:15:12.193226 tar[1572]: linux-amd64/LICENSE Jan 20 03:15:12.193226 tar[1572]: linux-amd64/helm Jan 20 03:15:12.198171 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 20 03:15:12.205363 dbus-daemon[1549]: [system] SELinux support is enabled Jan 20 03:15:12.206480 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 03:15:12.212443 systemd-logind[1564]: Watching system buttons on /dev/input/event3 (Power Button) Jan 20 03:15:12.212480 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 03:15:12.213919 systemd-logind[1564]: New seat seat0. Jan 20 03:15:12.214866 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 03:15:12.214909 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 03:15:12.215993 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 20 03:15:12.216051 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 03:15:12.227809 systemd-timesyncd[1497]: Contacted time server 176.58.109.184:123 (0.flatcar.pool.ntp.org). Jan 20 03:15:12.227889 systemd-timesyncd[1497]: Initial clock synchronization to Tue 2026-01-20 03:15:12.158024 UTC. Jan 20 03:15:12.230150 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 03:15:12.241185 dbus-daemon[1549]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1484 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 20 03:15:12.253487 dbus-daemon[1549]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 03:15:12.262088 update_engine[1566]: I20260120 03:15:12.261999 1566 update_check_scheduler.cc:74] Next update check in 5m22s Jan 20 03:15:12.263999 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 20 03:15:12.264918 systemd[1]: Started update-engine.service - Update Engine. Jan 20 03:15:12.311449 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 03:15:12.387784 bash[1613]: Updated "/home/core/.ssh/authorized_keys" Jan 20 03:15:12.391235 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 03:15:12.401761 systemd[1]: Starting sshkeys.service... Jan 20 03:15:12.477651 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 03:15:12.483057 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 20 03:15:12.509298 containerd[1584]: time="2026-01-20T03:15:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 03:15:12.522905 containerd[1584]: time="2026-01-20T03:15:12.521786740Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 20 03:15:12.567264 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:12.567342 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 20 03:15:12.611179 extend-filesystems[1595]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 03:15:12.611179 extend-filesystems[1595]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 20 03:15:12.611179 extend-filesystems[1595]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 20 03:15:12.619138 extend-filesystems[1553]: Resized filesystem in /dev/vda9 Jan 20 03:15:12.613522 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 03:15:12.614345 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 20 03:15:12.648298 containerd[1584]: time="2026-01-20T03:15:12.647538039Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.39µs" Jan 20 03:15:12.648298 containerd[1584]: time="2026-01-20T03:15:12.647807657Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 03:15:12.648298 containerd[1584]: time="2026-01-20T03:15:12.647855698Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 03:15:12.648298 containerd[1584]: time="2026-01-20T03:15:12.648136712Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 03:15:12.648298 containerd[1584]: time="2026-01-20T03:15:12.648175705Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 03:15:12.648298 containerd[1584]: time="2026-01-20T03:15:12.648226439Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 03:15:12.648520 containerd[1584]: time="2026-01-20T03:15:12.648338027Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 03:15:12.648520 containerd[1584]: time="2026-01-20T03:15:12.648359015Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 03:15:12.652329 containerd[1584]: time="2026-01-20T03:15:12.648661374Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 03:15:12.652329 containerd[1584]: time="2026-01-20T03:15:12.648699188Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 03:15:12.652329 containerd[1584]: time="2026-01-20T03:15:12.648732037Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 03:15:12.652329 containerd[1584]: time="2026-01-20T03:15:12.648749300Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 03:15:12.662385 containerd[1584]: time="2026-01-20T03:15:12.662334120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 03:15:12.662813 containerd[1584]: time="2026-01-20T03:15:12.662785942Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 03:15:12.662869 containerd[1584]: time="2026-01-20T03:15:12.662833574Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 03:15:12.662869 containerd[1584]: time="2026-01-20T03:15:12.662850524Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 03:15:12.662958 containerd[1584]: time="2026-01-20T03:15:12.662887583Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 03:15:12.663341 containerd[1584]: time="2026-01-20T03:15:12.663310816Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 03:15:12.664312 containerd[1584]: time="2026-01-20T03:15:12.663436339Z" level=info msg="metadata content store policy set" policy=shared Jan 20 03:15:12.673844 containerd[1584]: time="2026-01-20T03:15:12.673803266Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Jan 20 03:15:12.673980 containerd[1584]: time="2026-01-20T03:15:12.673877910Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 03:15:12.673980 containerd[1584]: time="2026-01-20T03:15:12.673900340Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 03:15:12.673980 containerd[1584]: time="2026-01-20T03:15:12.673928238Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 03:15:12.673980 containerd[1584]: time="2026-01-20T03:15:12.673954708Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 03:15:12.673980 containerd[1584]: time="2026-01-20T03:15:12.673971324Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 03:15:12.674196 containerd[1584]: time="2026-01-20T03:15:12.674012842Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 03:15:12.674196 containerd[1584]: time="2026-01-20T03:15:12.674047277Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 03:15:12.674196 containerd[1584]: time="2026-01-20T03:15:12.674064282Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 03:15:12.674196 containerd[1584]: time="2026-01-20T03:15:12.674080812Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 03:15:12.674196 containerd[1584]: time="2026-01-20T03:15:12.674094698Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 03:15:12.674196 containerd[1584]: time="2026-01-20T03:15:12.674112323Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 20 03:15:12.674386 containerd[1584]: time="2026-01-20T03:15:12.674260429Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 03:15:12.674386 containerd[1584]: time="2026-01-20T03:15:12.674299978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 03:15:12.674386 containerd[1584]: time="2026-01-20T03:15:12.674322010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 03:15:12.674386 containerd[1584]: time="2026-01-20T03:15:12.674353365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 03:15:12.674386 containerd[1584]: time="2026-01-20T03:15:12.674374713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 03:15:12.674559 containerd[1584]: time="2026-01-20T03:15:12.674391000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 03:15:12.674559 containerd[1584]: time="2026-01-20T03:15:12.674415119Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 03:15:12.674559 containerd[1584]: time="2026-01-20T03:15:12.674434235Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 03:15:12.674559 containerd[1584]: time="2026-01-20T03:15:12.674451358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 03:15:12.674559 containerd[1584]: time="2026-01-20T03:15:12.674467965Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 03:15:12.674559 containerd[1584]: time="2026-01-20T03:15:12.674483284Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 03:15:12.681221 containerd[1584]: 
time="2026-01-20T03:15:12.674579818Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 03:15:12.681221 containerd[1584]: time="2026-01-20T03:15:12.676655475Z" level=info msg="Start snapshots syncer" Jan 20 03:15:12.681221 containerd[1584]: time="2026-01-20T03:15:12.676700303Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677123131Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677194341Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677277204Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677419168Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677453561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677471251Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677486881Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677533343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677555679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.677590914Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679787459Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679823007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679849749Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679903435Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679930652Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679944829Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679960054Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.679981981Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.680028613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.680068490Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.680093578Z" level=info msg="runtime interface created" Jan 20 03:15:12.681342 containerd[1584]: 
time="2026-01-20T03:15:12.680104062Z" level=info msg="created NRI interface" Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.680116506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.680132567Z" level=info msg="Connect containerd service" Jan 20 03:15:12.681342 containerd[1584]: time="2026-01-20T03:15:12.680159312Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 03:15:12.692196 containerd[1584]: time="2026-01-20T03:15:12.687755499Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 03:15:12.737058 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 20 03:15:12.744433 dbus-daemon[1549]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 20 03:15:12.749039 dbus-daemon[1549]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1602 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 20 03:15:12.758804 systemd[1]: Starting polkit.service - Authorization Manager... 
Jan 20 03:15:12.767119 locksmithd[1603]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 03:15:12.881928 sshd_keygen[1590]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.891784310Z" level=info msg="Start subscribing containerd event" Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.891877882Z" level=info msg="Start recovering state" Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.892060031Z" level=info msg="Start event monitor" Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.892086279Z" level=info msg="Start cni network conf syncer for default" Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.892105406Z" level=info msg="Start streaming server" Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.892138595Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.892157075Z" level=info msg="runtime interface starting up..." Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.892170123Z" level=info msg="starting plugins..." Jan 20 03:15:12.892992 containerd[1584]: time="2026-01-20T03:15:12.892196614Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 03:15:12.898102 containerd[1584]: time="2026-01-20T03:15:12.897964352Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 03:15:12.898189 containerd[1584]: time="2026-01-20T03:15:12.898112750Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 03:15:12.898189 containerd[1584]: time="2026-01-20T03:15:12.898249278Z" level=info msg="containerd successfully booted in 0.391569s" Jan 20 03:15:12.901836 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 03:15:12.905071 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 20 03:15:12.948182 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 03:15:12.952231 polkitd[1638]: Started polkitd version 126 Jan 20 03:15:12.956100 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 03:15:12.959292 systemd[1]: Started sshd@0-10.230.49.118:22-20.161.92.111:37952.service - OpenSSH per-connection server daemon (20.161.92.111:37952). Jan 20 03:15:12.965939 polkitd[1638]: Loading rules from directory /etc/polkit-1/rules.d Jan 20 03:15:12.966464 polkitd[1638]: Loading rules from directory /run/polkit-1/rules.d Jan 20 03:15:12.968244 polkitd[1638]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 03:15:12.969038 polkitd[1638]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 20 03:15:12.969170 polkitd[1638]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 20 03:15:12.969332 polkitd[1638]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 20 03:15:12.971121 polkitd[1638]: Finished loading, compiling and executing 2 rules Jan 20 03:15:12.971870 systemd[1]: Started polkit.service - Authorization Manager. Jan 20 03:15:12.973549 dbus-daemon[1549]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 20 03:15:12.976769 polkitd[1638]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 20 03:15:12.978664 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 03:15:12.980203 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 03:15:12.986245 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 03:15:13.007494 systemd-hostnamed[1602]: Hostname set to (static) Jan 20 03:15:13.014743 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 20 03:15:13.024846 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 03:15:13.028406 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 03:15:13.030003 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 03:15:13.042635 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:13.209464 tar[1572]: linux-amd64/README.md Jan 20 03:15:13.229772 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 03:15:13.426872 systemd-networkd[1484]: eth0: Gained IPv6LL Jan 20 03:15:13.430892 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 03:15:13.432792 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 03:15:13.436357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:15:13.440922 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 03:15:13.474427 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 03:15:13.586298 sshd[1658]: Accepted publickey for core from 20.161.92.111 port 37952 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:15:13.590377 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:15:13.602640 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 03:15:13.605248 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 03:15:13.628105 systemd-logind[1564]: New session 1 of user core. Jan 20 03:15:13.639283 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 03:15:13.645452 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 03:15:13.663702 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 03:15:13.669522 systemd-logind[1564]: New session c1 of user core. 
Jan 20 03:15:13.681605 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:13.842756 systemd[1693]: Queued start job for default target default.target. Jan 20 03:15:13.849827 systemd[1693]: Created slice app.slice - User Application Slice. Jan 20 03:15:13.849870 systemd[1693]: Reached target paths.target - Paths. Jan 20 03:15:13.849939 systemd[1693]: Reached target timers.target - Timers. Jan 20 03:15:13.851788 systemd[1693]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 03:15:13.887777 systemd[1693]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 03:15:13.887956 systemd[1693]: Reached target sockets.target - Sockets. Jan 20 03:15:13.888032 systemd[1693]: Reached target basic.target - Basic System. Jan 20 03:15:13.888101 systemd[1693]: Reached target default.target - Main User Target. Jan 20 03:15:13.888155 systemd[1693]: Startup finished in 208ms. Jan 20 03:15:13.888822 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 03:15:13.900082 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 03:15:14.318026 systemd[1]: Started sshd@1-10.230.49.118:22-20.161.92.111:50902.service - OpenSSH per-connection server daemon (20.161.92.111:50902). Jan 20 03:15:14.454926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:15:14.469340 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 03:15:14.907718 sshd[1705]: Accepted publickey for core from 20.161.92.111 port 50902 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:15:14.909491 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:15:14.921098 systemd-logind[1564]: New session 2 of user core. Jan 20 03:15:14.928013 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 20 03:15:15.060683 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:15.071630 kubelet[1713]: E0120 03:15:15.068945 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 03:15:15.073116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 03:15:15.073369 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 03:15:15.074236 systemd[1]: kubelet.service: Consumed 1.058s CPU time, 265.7M memory peak. Jan 20 03:15:15.307699 sshd[1718]: Connection closed by 20.161.92.111 port 50902 Jan 20 03:15:15.308728 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Jan 20 03:15:15.313562 systemd[1]: sshd@1-10.230.49.118:22-20.161.92.111:50902.service: Deactivated successfully. Jan 20 03:15:15.316523 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 03:15:15.318878 systemd-logind[1564]: Session 2 logged out. Waiting for processes to exit. Jan 20 03:15:15.321010 systemd-logind[1564]: Removed session 2. Jan 20 03:15:15.409467 systemd[1]: Started sshd@2-10.230.49.118:22-20.161.92.111:50918.service - OpenSSH per-connection server daemon (20.161.92.111:50918). Jan 20 03:15:15.550006 systemd-networkd[1484]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c5d:24:19ff:fee6:3176/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c5d:24:19ff:fee6:3176/64 assigned by NDisc. Jan 20 03:15:15.550027 systemd-networkd[1484]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 20 03:15:15.699625 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:15.998002 sshd[1727]: Accepted publickey for core from 20.161.92.111 port 50918 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:15:15.999971 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:15:16.008454 systemd-logind[1564]: New session 3 of user core. Jan 20 03:15:16.018939 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 03:15:16.401462 sshd[1732]: Connection closed by 20.161.92.111 port 50918 Jan 20 03:15:16.403847 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Jan 20 03:15:16.409565 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Jan 20 03:15:16.410469 systemd[1]: sshd@2-10.230.49.118:22-20.161.92.111:50918.service: Deactivated successfully. Jan 20 03:15:16.413494 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 03:15:16.417125 systemd-logind[1564]: Removed session 3. Jan 20 03:15:18.083264 login[1672]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 20 03:15:18.093666 systemd-logind[1564]: New session 4 of user core. Jan 20 03:15:18.101086 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 03:15:18.120655 login[1671]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 20 03:15:18.131077 systemd-logind[1564]: New session 5 of user core. Jan 20 03:15:18.142162 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 20 03:15:19.079621 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 03:15:19.095119 coreos-metadata[1548]: Jan 20 03:15:19.094 WARN failed to locate config-drive, using the metadata service API instead Jan 20 03:15:19.120748 coreos-metadata[1548]: Jan 20 03:15:19.120 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 20 03:15:19.128659 coreos-metadata[1548]: Jan 20 03:15:19.128 INFO Fetch failed with 404: resource not found Jan 20 03:15:19.128778 coreos-metadata[1548]: Jan 20 03:15:19.128 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 03:15:19.129522 coreos-metadata[1548]: Jan 20 03:15:19.129 INFO Fetch successful Jan 20 03:15:19.129703 coreos-metadata[1548]: Jan 20 03:15:19.129 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 20 03:15:19.141807 coreos-metadata[1548]: Jan 20 03:15:19.141 INFO Fetch successful Jan 20 03:15:19.141982 coreos-metadata[1548]: Jan 20 03:15:19.141 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 20 03:15:19.156170 coreos-metadata[1548]: Jan 20 03:15:19.156 INFO Fetch successful Jan 20 03:15:19.156416 coreos-metadata[1548]: Jan 20 03:15:19.156 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 20 03:15:19.170098 coreos-metadata[1548]: Jan 20 03:15:19.170 INFO Fetch successful Jan 20 03:15:19.170336 coreos-metadata[1548]: Jan 20 03:15:19.170 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 20 03:15:19.188660 coreos-metadata[1548]: Jan 20 03:15:19.188 INFO Fetch successful Jan 20 03:15:19.231180 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 03:15:19.232861 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 20 03:15:19.716623 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 20 03:15:19.725671 coreos-metadata[1621]: Jan 20 03:15:19.725 WARN failed to locate config-drive, using the metadata service API instead
Jan 20 03:15:19.747606 coreos-metadata[1621]: Jan 20 03:15:19.747 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 20 03:15:19.773140 coreos-metadata[1621]: Jan 20 03:15:19.773 INFO Fetch successful
Jan 20 03:15:19.773487 coreos-metadata[1621]: Jan 20 03:15:19.773 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 20 03:15:19.809837 coreos-metadata[1621]: Jan 20 03:15:19.809 INFO Fetch successful
Jan 20 03:15:19.827463 unknown[1621]: wrote ssh authorized keys file for user: core
Jan 20 03:15:19.860326 update-ssh-keys[1772]: Updated "/home/core/.ssh/authorized_keys"
Jan 20 03:15:19.862658 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 20 03:15:19.866670 systemd[1]: Finished sshkeys.service.
Jan 20 03:15:19.869189 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 20 03:15:19.869901 systemd[1]: Startup finished in 3.475s (kernel) + 13.670s (initrd) + 11.608s (userspace) = 28.754s.
Jan 20 03:15:25.214463 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 20 03:15:25.217049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:15:25.415228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:15:25.429223 (kubelet)[1783]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 03:15:25.509758 kubelet[1783]: E0120 03:15:25.509572 1783 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 03:15:25.515196 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 03:15:25.515444 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 03:15:25.516376 systemd[1]: kubelet.service: Consumed 224ms CPU time, 110.7M memory peak.
Jan 20 03:15:26.474817 systemd[1]: Started sshd@3-10.230.49.118:22-20.161.92.111:36122.service - OpenSSH per-connection server daemon (20.161.92.111:36122).
Jan 20 03:15:27.067766 sshd[1791]: Accepted publickey for core from 20.161.92.111 port 36122 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0
Jan 20 03:15:27.069351 sshd-session[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:15:27.075904 systemd-logind[1564]: New session 6 of user core.
Jan 20 03:15:27.082781 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 20 03:15:27.473237 sshd[1794]: Connection closed by 20.161.92.111 port 36122
Jan 20 03:15:27.473558 sshd-session[1791]: pam_unix(sshd:session): session closed for user core
Jan 20 03:15:27.479216 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit.
Jan 20 03:15:27.479949 systemd[1]: sshd@3-10.230.49.118:22-20.161.92.111:36122.service: Deactivated successfully.
Jan 20 03:15:27.482689 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 03:15:27.485132 systemd-logind[1564]: Removed session 6.
Jan 20 03:15:27.578883 systemd[1]: Started sshd@4-10.230.49.118:22-20.161.92.111:36126.service - OpenSSH per-connection server daemon (20.161.92.111:36126).
Jan 20 03:15:28.172421 sshd[1800]: Accepted publickey for core from 20.161.92.111 port 36126 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0
Jan 20 03:15:28.174121 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:15:28.180340 systemd-logind[1564]: New session 7 of user core.
Jan 20 03:15:28.188942 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 20 03:15:28.568618 sshd[1803]: Connection closed by 20.161.92.111 port 36126
Jan 20 03:15:28.568564 sshd-session[1800]: pam_unix(sshd:session): session closed for user core
Jan 20 03:15:28.573214 systemd[1]: sshd@4-10.230.49.118:22-20.161.92.111:36126.service: Deactivated successfully.
Jan 20 03:15:28.575767 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 03:15:28.578374 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit.
Jan 20 03:15:28.579910 systemd-logind[1564]: Removed session 7.
Jan 20 03:15:28.672370 systemd[1]: Started sshd@5-10.230.49.118:22-20.161.92.111:36130.service - OpenSSH per-connection server daemon (20.161.92.111:36130).
Jan 20 03:15:29.282634 sshd[1809]: Accepted publickey for core from 20.161.92.111 port 36130 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0
Jan 20 03:15:29.283955 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:15:29.291833 systemd-logind[1564]: New session 8 of user core.
Jan 20 03:15:29.298808 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 20 03:15:29.702886 sshd[1812]: Connection closed by 20.161.92.111 port 36130
Jan 20 03:15:29.703973 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Jan 20 03:15:29.709500 systemd[1]: sshd@5-10.230.49.118:22-20.161.92.111:36130.service: Deactivated successfully.
Jan 20 03:15:29.711758 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 03:15:29.712857 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit.
Jan 20 03:15:29.714909 systemd-logind[1564]: Removed session 8.
Jan 20 03:15:29.798566 systemd[1]: Started sshd@6-10.230.49.118:22-20.161.92.111:36132.service - OpenSSH per-connection server daemon (20.161.92.111:36132).
Jan 20 03:15:30.379389 sshd[1818]: Accepted publickey for core from 20.161.92.111 port 36132 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0
Jan 20 03:15:30.381137 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:15:30.388018 systemd-logind[1564]: New session 9 of user core.
Jan 20 03:15:30.397774 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 20 03:15:30.707792 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 20 03:15:30.708263 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:15:30.722075 sudo[1822]: pam_unix(sudo:session): session closed for user root
Jan 20 03:15:30.812334 sshd[1821]: Connection closed by 20.161.92.111 port 36132
Jan 20 03:15:30.813381 sshd-session[1818]: pam_unix(sshd:session): session closed for user core
Jan 20 03:15:30.819746 systemd[1]: sshd@6-10.230.49.118:22-20.161.92.111:36132.service: Deactivated successfully.
Jan 20 03:15:30.822581 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 03:15:30.824169 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit.
Jan 20 03:15:30.826515 systemd-logind[1564]: Removed session 9.
Jan 20 03:15:30.921621 systemd[1]: Started sshd@7-10.230.49.118:22-20.161.92.111:36146.service - OpenSSH per-connection server daemon (20.161.92.111:36146).
Jan 20 03:15:31.534462 sshd[1828]: Accepted publickey for core from 20.161.92.111 port 36146 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0
Jan 20 03:15:31.536770 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:15:31.544020 systemd-logind[1564]: New session 10 of user core.
Jan 20 03:15:31.554790 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 20 03:15:31.861160 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 20 03:15:31.862282 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:15:31.876824 sudo[1833]: pam_unix(sudo:session): session closed for user root
Jan 20 03:15:31.885542 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 20 03:15:31.886031 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:15:31.900892 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 20 03:15:31.946300 augenrules[1855]: No rules
Jan 20 03:15:31.947101 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 20 03:15:31.947577 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 20 03:15:31.949127 sudo[1832]: pam_unix(sudo:session): session closed for user root
Jan 20 03:15:32.040846 sshd[1831]: Connection closed by 20.161.92.111 port 36146
Jan 20 03:15:32.041736 sshd-session[1828]: pam_unix(sshd:session): session closed for user core
Jan 20 03:15:32.046882 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit.
Jan 20 03:15:32.048019 systemd[1]: sshd@7-10.230.49.118:22-20.161.92.111:36146.service: Deactivated successfully.
Jan 20 03:15:32.050940 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 03:15:32.053734 systemd-logind[1564]: Removed session 10.
Jan 20 03:15:32.140143 systemd[1]: Started sshd@8-10.230.49.118:22-20.161.92.111:36152.service - OpenSSH per-connection server daemon (20.161.92.111:36152).
Jan 20 03:15:32.736700 sshd[1864]: Accepted publickey for core from 20.161.92.111 port 36152 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0
Jan 20 03:15:32.736448 sshd-session[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 03:15:32.742744 systemd-logind[1564]: New session 11 of user core.
Jan 20 03:15:32.756222 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 20 03:15:33.054203 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 20 03:15:33.054747 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 20 03:15:33.518509 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 20 03:15:33.531128 (dockerd)[1886]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 20 03:15:33.856924 systemd[1]: Started sshd@9-10.230.49.118:22-164.92.217.44:53474.service - OpenSSH per-connection server daemon (164.92.217.44:53474).
Jan 20 03:15:33.902217 dockerd[1886]: time="2026-01-20T03:15:33.901791185Z" level=info msg="Starting up"
Jan 20 03:15:33.903963 dockerd[1886]: time="2026-01-20T03:15:33.903931874Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 20 03:15:33.922477 dockerd[1886]: time="2026-01-20T03:15:33.922391978Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 20 03:15:33.970219 dockerd[1886]: time="2026-01-20T03:15:33.970101121Z" level=info msg="Loading containers: start."
Jan 20 03:15:33.984621 kernel: Initializing XFRM netlink socket
Jan 20 03:15:34.008000 sshd[1893]: Invalid user search from 164.92.217.44 port 53474
Jan 20 03:15:34.045501 sshd[1893]: Connection closed by invalid user search 164.92.217.44 port 53474 [preauth]
Jan 20 03:15:34.048206 systemd[1]: sshd@9-10.230.49.118:22-164.92.217.44:53474.service: Deactivated successfully.
Jan 20 03:15:34.305163 systemd-networkd[1484]: docker0: Link UP
Jan 20 03:15:34.309209 dockerd[1886]: time="2026-01-20T03:15:34.309143889Z" level=info msg="Loading containers: done."
Jan 20 03:15:34.331096 dockerd[1886]: time="2026-01-20T03:15:34.330334597Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 20 03:15:34.331096 dockerd[1886]: time="2026-01-20T03:15:34.330439001Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 20 03:15:34.331096 dockerd[1886]: time="2026-01-20T03:15:34.330560771Z" level=info msg="Initializing buildkit"
Jan 20 03:15:34.330641 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3664457970-merged.mount: Deactivated successfully.
Jan 20 03:15:34.358656 dockerd[1886]: time="2026-01-20T03:15:34.358540334Z" level=info msg="Completed buildkit initialization"
Jan 20 03:15:34.367949 dockerd[1886]: time="2026-01-20T03:15:34.367891233Z" level=info msg="Daemon has completed initialization"
Jan 20 03:15:34.368079 dockerd[1886]: time="2026-01-20T03:15:34.367994333Z" level=info msg="API listen on /run/docker.sock"
Jan 20 03:15:34.369291 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 20 03:15:35.532037 containerd[1584]: time="2026-01-20T03:15:35.531935634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\""
Jan 20 03:15:35.714447 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 20 03:15:35.716816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:15:35.940323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:15:35.951263 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 03:15:36.039908 kubelet[2115]: E0120 03:15:36.039805 2115 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 03:15:36.043031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 03:15:36.043272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 03:15:36.044040 systemd[1]: kubelet.service: Consumed 237ms CPU time, 109.3M memory peak.
Jan 20 03:15:36.366095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780098720.mount: Deactivated successfully.
Jan 20 03:15:40.697056 containerd[1584]: time="2026-01-20T03:15:40.695728318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:40.697056 containerd[1584]: time="2026-01-20T03:15:40.696910806Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114720"
Jan 20 03:15:40.697056 containerd[1584]: time="2026-01-20T03:15:40.696987408Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:40.700176 containerd[1584]: time="2026-01-20T03:15:40.700132049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:40.701654 containerd[1584]: time="2026-01-20T03:15:40.701618898Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 5.169441736s"
Jan 20 03:15:40.701733 containerd[1584]: time="2026-01-20T03:15:40.701669091Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 20 03:15:40.702606 containerd[1584]: time="2026-01-20T03:15:40.702544044Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\""
Jan 20 03:15:45.592267 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 03:15:46.214653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Jan 20 03:15:46.218210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:15:46.413773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:15:46.422980 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 03:15:46.545878 containerd[1584]: time="2026-01-20T03:15:46.544471970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:46.546553 containerd[1584]: time="2026-01-20T03:15:46.545574369Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016789"
Jan 20 03:15:46.546807 containerd[1584]: time="2026-01-20T03:15:46.546777022Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:46.552985 containerd[1584]: time="2026-01-20T03:15:46.552946484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:46.554707 kubelet[2195]: E0120 03:15:46.554656 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 03:15:46.554988 containerd[1584]: time="2026-01-20T03:15:46.554942622Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 5.852361728s"
Jan 20 03:15:46.554988 containerd[1584]: time="2026-01-20T03:15:46.554974969Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 20 03:15:46.556821 containerd[1584]: time="2026-01-20T03:15:46.556781939Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\""
Jan 20 03:15:46.559655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 03:15:46.559865 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 03:15:46.560366 systemd[1]: kubelet.service: Consumed 213ms CPU time, 111M memory peak.
Jan 20 03:15:50.206978 containerd[1584]: time="2026-01-20T03:15:50.206888562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:50.208605 containerd[1584]: time="2026-01-20T03:15:50.208537312Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158110"
Jan 20 03:15:50.209639 containerd[1584]: time="2026-01-20T03:15:50.209335045Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:50.212984 containerd[1584]: time="2026-01-20T03:15:50.212912394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:50.216932 containerd[1584]: time="2026-01-20T03:15:50.216883126Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 3.660055416s"
Jan 20 03:15:50.217022 containerd[1584]: time="2026-01-20T03:15:50.216932008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\""
Jan 20 03:15:50.219321 containerd[1584]: time="2026-01-20T03:15:50.218838208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\""
Jan 20 03:15:51.673209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228317460.mount: Deactivated successfully.
Jan 20 03:15:52.450791 containerd[1584]: time="2026-01-20T03:15:52.450712454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:52.452790 containerd[1584]: time="2026-01-20T03:15:52.452747625Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104"
Jan 20 03:15:52.453580 containerd[1584]: time="2026-01-20T03:15:52.453528155Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:52.456777 containerd[1584]: time="2026-01-20T03:15:52.456731864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:52.458579 containerd[1584]: time="2026-01-20T03:15:52.458477951Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 2.239602918s"
Jan 20 03:15:52.458579 containerd[1584]: time="2026-01-20T03:15:52.458560585Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\""
Jan 20 03:15:52.459412 containerd[1584]: time="2026-01-20T03:15:52.459384202Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jan 20 03:15:53.031967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899922459.mount: Deactivated successfully.
Jan 20 03:15:54.507888 containerd[1584]: time="2026-01-20T03:15:54.507819989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:54.509087 containerd[1584]: time="2026-01-20T03:15:54.509056135Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Jan 20 03:15:54.510798 containerd[1584]: time="2026-01-20T03:15:54.510051709Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:54.513507 containerd[1584]: time="2026-01-20T03:15:54.513474465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:15:54.515263 containerd[1584]: time="2026-01-20T03:15:54.515228552Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.055805929s"
Jan 20 03:15:54.515409 containerd[1584]: time="2026-01-20T03:15:54.515364614Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Jan 20 03:15:54.516557 containerd[1584]: time="2026-01-20T03:15:54.516508972Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 20 03:15:55.045338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4085142806.mount: Deactivated successfully.
Jan 20 03:15:55.064795 containerd[1584]: time="2026-01-20T03:15:55.064684031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 03:15:55.065597 containerd[1584]: time="2026-01-20T03:15:55.065552380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 20 03:15:55.066922 containerd[1584]: time="2026-01-20T03:15:55.066468165Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 03:15:55.069107 containerd[1584]: time="2026-01-20T03:15:55.069053564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 20 03:15:55.070540 containerd[1584]: time="2026-01-20T03:15:55.070510540Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 553.775464ms"
Jan 20 03:15:55.070708 containerd[1584]: time="2026-01-20T03:15:55.070668556Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 20 03:15:55.071661 containerd[1584]: time="2026-01-20T03:15:55.071551965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jan 20 03:15:55.585920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785598866.mount: Deactivated successfully.
Jan 20 03:15:56.715221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 20 03:15:56.718488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:15:56.940475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:15:56.952107 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 20 03:15:57.070444 kubelet[2326]: E0120 03:15:57.070244 2326 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 20 03:15:57.074713 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 03:15:57.074989 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 20 03:15:57.075962 systemd[1]: kubelet.service: Consumed 251ms CPU time, 107.9M memory peak.
Jan 20 03:15:57.126245 update_engine[1566]: I20260120 03:15:57.126112 1566 update_attempter.cc:509] Updating boot flags...
Jan 20 03:16:00.786907 containerd[1584]: time="2026-01-20T03:16:00.786834356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:16:00.788399 containerd[1584]: time="2026-01-20T03:16:00.788367098Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235"
Jan 20 03:16:00.789227 containerd[1584]: time="2026-01-20T03:16:00.789156792Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:16:00.792567 containerd[1584]: time="2026-01-20T03:16:00.792533328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 03:16:00.794396 containerd[1584]: time="2026-01-20T03:16:00.794180596Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 5.722394667s"
Jan 20 03:16:00.794396 containerd[1584]: time="2026-01-20T03:16:00.794219400Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Jan 20 03:16:04.287859 systemd[1]: Started sshd@10-10.230.49.118:22-164.92.217.44:54022.service - OpenSSH per-connection server daemon (164.92.217.44:54022).
Jan 20 03:16:04.403841 sshd[2378]: Invalid user search from 164.92.217.44 port 54022
Jan 20 03:16:04.509754 sshd[2378]: Connection closed by invalid user search 164.92.217.44 port 54022 [preauth]
Jan 20 03:16:04.511049 systemd[1]: sshd@10-10.230.49.118:22-164.92.217.44:54022.service: Deactivated successfully.
Jan 20 03:16:05.697005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:16:05.697293 systemd[1]: kubelet.service: Consumed 251ms CPU time, 107.9M memory peak.
Jan 20 03:16:05.700633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:16:05.736905 systemd[1]: Reload requested from client PID 2390 ('systemctl') (unit session-11.scope)...
Jan 20 03:16:05.737097 systemd[1]: Reloading...
Jan 20 03:16:05.944251 zram_generator::config[2431]: No configuration found.
Jan 20 03:16:06.248993 systemd[1]: Reloading finished in 511 ms.
Jan 20 03:16:06.335665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:16:06.338285 systemd[1]: kubelet.service: Deactivated successfully.
Jan 20 03:16:06.338714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:16:06.338782 systemd[1]: kubelet.service: Consumed 155ms CPU time, 97.9M memory peak.
Jan 20 03:16:06.341940 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 20 03:16:06.519080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 20 03:16:06.530233 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 20 03:16:06.634439 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 03:16:06.634439 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 20 03:16:06.634439 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 20 03:16:06.635049 kubelet[2504]: I0120 03:16:06.634471 2504 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 20 03:16:06.871776 kubelet[2504]: I0120 03:16:06.870738 2504 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jan 20 03:16:06.871776 kubelet[2504]: I0120 03:16:06.870792 2504 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 20 03:16:06.871776 kubelet[2504]: I0120 03:16:06.871114 2504 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 20 03:16:06.906680 kubelet[2504]: E0120 03:16:06.906623 2504 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.49.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 20 03:16:06.908327 kubelet[2504]: I0120 03:16:06.908302 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 20 03:16:06.933012 kubelet[2504]: I0120 03:16:06.932009 2504 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 03:16:06.939433 kubelet[2504]: I0120 03:16:06.939406 2504 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 20 03:16:06.946004 kubelet[2504]: I0120 03:16:06.945961 2504 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 03:16:06.949199 kubelet[2504]: I0120 03:16:06.946005 2504 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jqch3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 03:16:06.949464 kubelet[2504]: I0120 03:16:06.949207 2504 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 03:16:06.949464 kubelet[2504]: I0120 03:16:06.949228 2504 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 03:16:06.950353 kubelet[2504]: I0120 03:16:06.950306 2504 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 03:16:06.953878 kubelet[2504]: I0120 03:16:06.953486 2504 kubelet.go:480] "Attempting to sync node with API server"
Jan 20 03:16:06.953878 kubelet[2504]: I0120 03:16:06.953520 2504 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 03:16:06.953878 kubelet[2504]: I0120 03:16:06.953569 2504 kubelet.go:386] "Adding apiserver pod source"
Jan 20 03:16:06.953878 kubelet[2504]: I0120 03:16:06.953611 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 03:16:06.961656 kubelet[2504]: E0120 03:16:06.961619 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.49.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jqch3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 03:16:06.961930 kubelet[2504]: I0120 03:16:06.961904 2504 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 20 03:16:06.962733 kubelet[2504]: I0120 03:16:06.962710 2504 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 20 03:16:06.964606 kubelet[2504]: W0120 03:16:06.964024 2504 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 20 03:16:06.972093 kubelet[2504]: E0120 03:16:06.972061 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.49.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 03:16:06.977449 kubelet[2504]: I0120 03:16:06.977425 2504 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 20 03:16:06.977701 kubelet[2504]: I0120 03:16:06.977676 2504 server.go:1289] "Started kubelet"
Jan 20 03:16:06.982112 kubelet[2504]: I0120 03:16:06.982088 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 03:16:06.996066 kubelet[2504]: E0120 03:16:06.991715 2504 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.49.118:6443/api/v1/namespaces/default/events\": dial tcp 10.230.49.118:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jqch3.gb1.brightbox.com.188c520c6d5b926d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jqch3.gb1.brightbox.com,UID:srv-jqch3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jqch3.gb1.brightbox.com,},FirstTimestamp:2026-01-20 03:16:06.977565293 +0000 UTC m=+0.442196224,LastTimestamp:2026-01-20 03:16:06.977565293 +0000 UTC m=+0.442196224,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jqch3.gb1.brightbox.com,}"
Jan 20 03:16:06.996066 kubelet[2504]: I0120 03:16:06.994705 2504 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 03:16:06.998140 kubelet[2504]: I0120 03:16:06.998017 2504 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 20 03:16:06.998453 kubelet[2504]: I0120 03:16:06.998198 2504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 03:16:06.999785 kubelet[2504]: I0120 03:16:06.999341 2504 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 03:16:06.999785 kubelet[2504]: E0120 03:16:06.999553 2504 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jqch3.gb1.brightbox.com\" not found"
Jan 20 03:16:07.005064 kubelet[2504]: I0120 03:16:07.004963 2504 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 20 03:16:07.005208 kubelet[2504]: I0120 03:16:06.998119 2504 server.go:317] "Adding debug handlers to kubelet server"
Jan 20 03:16:07.008478 kubelet[2504]: I0120 03:16:07.006937 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 20 03:16:07.008942 kubelet[2504]: I0120 03:16:07.008753 2504 reconciler.go:26] "Reconciler: start to sync state"
Jan 20 03:16:07.011146 kubelet[2504]: E0120 03:16:07.011097 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jqch3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.118:6443: connect: connection refused" interval="200ms"
Jan 20 03:16:07.012728 kubelet[2504]: I0120 03:16:07.012705 2504 factory.go:223] Registration of the systemd container factory successfully
Jan 20 03:16:07.013083 kubelet[2504]: I0120 03:16:07.012825 2504 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 20 03:16:07.016087 kubelet[2504]: I0120 03:16:07.016026 2504 factory.go:223] Registration of the containerd container factory successfully
Jan 20 03:16:07.016417 kubelet[2504]: I0120 03:16:07.016371 2504 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jan 20 03:16:07.029077 kubelet[2504]: E0120 03:16:07.029050 2504 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 20 03:16:07.030450 kubelet[2504]: E0120 03:16:07.030325 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.49.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 03:16:07.048202 kubelet[2504]: I0120 03:16:07.047808 2504 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jan 20 03:16:07.048202 kubelet[2504]: I0120 03:16:07.047861 2504 status_manager.go:230] "Starting to sync pod status with apiserver"
Jan 20 03:16:07.048202 kubelet[2504]: I0120 03:16:07.047901 2504 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 20 03:16:07.048202 kubelet[2504]: I0120 03:16:07.047919 2504 kubelet.go:2436] "Starting kubelet main sync loop"
Jan 20 03:16:07.048202 kubelet[2504]: E0120 03:16:07.047979 2504 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 20 03:16:07.052361 kubelet[2504]: E0120 03:16:07.051652 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.49.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 03:16:07.065500 kubelet[2504]: I0120 03:16:07.065476 2504 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 20 03:16:07.065660 kubelet[2504]: I0120 03:16:07.065641 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 20 03:16:07.065789 kubelet[2504]: I0120 03:16:07.065772 2504 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 03:16:07.067911 kubelet[2504]: I0120 03:16:07.067889 2504 policy_none.go:49] "None policy: Start"
Jan 20 03:16:07.068046 kubelet[2504]: I0120 03:16:07.068027 2504 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 20 03:16:07.068179 kubelet[2504]: I0120 03:16:07.068151 2504 state_mem.go:35] "Initializing new in-memory state store"
Jan 20 03:16:07.076603 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 20 03:16:07.087759 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 20 03:16:07.092091 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 20 03:16:07.101419 kubelet[2504]: E0120 03:16:07.100202 2504 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jqch3.gb1.brightbox.com\" not found"
Jan 20 03:16:07.103526 kubelet[2504]: E0120 03:16:07.103497 2504 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 20 03:16:07.103815 kubelet[2504]: I0120 03:16:07.103789 2504 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 20 03:16:07.103908 kubelet[2504]: I0120 03:16:07.103821 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 20 03:16:07.104382 kubelet[2504]: I0120 03:16:07.104354 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 20 03:16:07.105538 kubelet[2504]: E0120 03:16:07.105506 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 20 03:16:07.105671 kubelet[2504]: E0120 03:16:07.105579 2504 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-jqch3.gb1.brightbox.com\" not found"
Jan 20 03:16:07.176115 systemd[1]: Created slice kubepods-burstable-pod3b6395778eb8a0bc5eb8ab2bcb29928b.slice - libcontainer container kubepods-burstable-pod3b6395778eb8a0bc5eb8ab2bcb29928b.slice.
Jan 20 03:16:07.193920 kubelet[2504]: E0120 03:16:07.193870 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.198579 systemd[1]: Created slice kubepods-burstable-pod868bf72a905aef9cca2d6fd076bb6e00.slice - libcontainer container kubepods-burstable-pod868bf72a905aef9cca2d6fd076bb6e00.slice.
Jan 20 03:16:07.206138 kubelet[2504]: I0120 03:16:07.206083 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.207108 kubelet[2504]: E0120 03:16:07.206575 2504 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.118:6443/api/v1/nodes\": dial tcp 10.230.49.118:6443: connect: connection refused" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.207631 kubelet[2504]: E0120 03:16:07.207579 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210518 kubelet[2504]: I0120 03:16:07.209227 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210518 kubelet[2504]: I0120 03:16:07.209268 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/395fabd2fa682771cbbd6806bf561d49-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jqch3.gb1.brightbox.com\" (UID: \"395fabd2fa682771cbbd6806bf561d49\") " pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210518 kubelet[2504]: I0120 03:16:07.209301 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-ca-certs\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210518 kubelet[2504]: I0120 03:16:07.209328 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-flexvolume-dir\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210758 kubelet[2504]: I0120 03:16:07.209354 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-kubeconfig\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210758 kubelet[2504]: I0120 03:16:07.209419 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/868bf72a905aef9cca2d6fd076bb6e00-kubeconfig\") pod \"kube-scheduler-srv-jqch3.gb1.brightbox.com\" (UID: \"868bf72a905aef9cca2d6fd076bb6e00\") " pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210758 kubelet[2504]: I0120 03:16:07.209444 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/395fabd2fa682771cbbd6806bf561d49-ca-certs\") pod \"kube-apiserver-srv-jqch3.gb1.brightbox.com\" (UID: \"395fabd2fa682771cbbd6806bf561d49\") " pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210758 kubelet[2504]: I0120 03:16:07.209469 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/395fabd2fa682771cbbd6806bf561d49-k8s-certs\") pod \"kube-apiserver-srv-jqch3.gb1.brightbox.com\" (UID: \"395fabd2fa682771cbbd6806bf561d49\") " pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.210758 kubelet[2504]: I0120 03:16:07.209495 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-k8s-certs\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.212620 kubelet[2504]: E0120 03:16:07.211973 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jqch3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.118:6443: connect: connection refused" interval="400ms"
Jan 20 03:16:07.212902 systemd[1]: Created slice kubepods-burstable-pod395fabd2fa682771cbbd6806bf561d49.slice - libcontainer container kubepods-burstable-pod395fabd2fa682771cbbd6806bf561d49.slice.
Jan 20 03:16:07.215485 kubelet[2504]: E0120 03:16:07.215413 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.408846 kubelet[2504]: I0120 03:16:07.408777 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.409212 kubelet[2504]: E0120 03:16:07.409171 2504 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.118:6443/api/v1/nodes\": dial tcp 10.230.49.118:6443: connect: connection refused" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.496284 containerd[1584]: time="2026-01-20T03:16:07.496233588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jqch3.gb1.brightbox.com,Uid:3b6395778eb8a0bc5eb8ab2bcb29928b,Namespace:kube-system,Attempt:0,}"
Jan 20 03:16:07.509221 containerd[1584]: time="2026-01-20T03:16:07.509186089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jqch3.gb1.brightbox.com,Uid:868bf72a905aef9cca2d6fd076bb6e00,Namespace:kube-system,Attempt:0,}"
Jan 20 03:16:07.522850 containerd[1584]: time="2026-01-20T03:16:07.522804123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jqch3.gb1.brightbox.com,Uid:395fabd2fa682771cbbd6806bf561d49,Namespace:kube-system,Attempt:0,}"
Jan 20 03:16:07.630954 kubelet[2504]: E0120 03:16:07.630783 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jqch3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.118:6443: connect: connection refused" interval="800ms"
Jan 20 03:16:07.650175 containerd[1584]: time="2026-01-20T03:16:07.650111416Z" level=info msg="connecting to shim 3575c9d3221144e5159ec57b91ad85324ba32f7de0aba24357e1d893e858fd5d" address="unix:///run/containerd/s/650a83f61178037d7a417eb4f2b3b8be21970a67bc08294e978c6106cc66dc25" namespace=k8s.io protocol=ttrpc version=3
Jan 20 03:16:07.651078 containerd[1584]: time="2026-01-20T03:16:07.651047047Z" level=info msg="connecting to shim fdf0d4db5543fe981bf110ee07c290b9a02e7cfdc176af431ac26064248a45ee" address="unix:///run/containerd/s/3d4fa44f6acaa6debc03ee520d37c8ebd9de5c93d0777dd7a332ddd7012954e4" namespace=k8s.io protocol=ttrpc version=3
Jan 20 03:16:07.658043 containerd[1584]: time="2026-01-20T03:16:07.658009866Z" level=info msg="connecting to shim e09305b18b4f156ed21b10b9d5e7c6e588346183e756b1df1e72bbe48773a7f0" address="unix:///run/containerd/s/96a13ae828610719926b6d15d0ca00b18f45a5b78c0e3fff9fa366c6fd220845" namespace=k8s.io protocol=ttrpc version=3
Jan 20 03:16:07.783834 systemd[1]: Started cri-containerd-3575c9d3221144e5159ec57b91ad85324ba32f7de0aba24357e1d893e858fd5d.scope - libcontainer container 3575c9d3221144e5159ec57b91ad85324ba32f7de0aba24357e1d893e858fd5d.
Jan 20 03:16:07.785978 systemd[1]: Started cri-containerd-e09305b18b4f156ed21b10b9d5e7c6e588346183e756b1df1e72bbe48773a7f0.scope - libcontainer container e09305b18b4f156ed21b10b9d5e7c6e588346183e756b1df1e72bbe48773a7f0.
Jan 20 03:16:07.789425 systemd[1]: Started cri-containerd-fdf0d4db5543fe981bf110ee07c290b9a02e7cfdc176af431ac26064248a45ee.scope - libcontainer container fdf0d4db5543fe981bf110ee07c290b9a02e7cfdc176af431ac26064248a45ee.
Jan 20 03:16:07.814114 kubelet[2504]: I0120 03:16:07.814081 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.815813 kubelet[2504]: E0120 03:16:07.815741 2504 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.118:6443/api/v1/nodes\": dial tcp 10.230.49.118:6443: connect: connection refused" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:07.888883 kubelet[2504]: E0120 03:16:07.888742 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.49.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 20 03:16:07.904141 containerd[1584]: time="2026-01-20T03:16:07.903868664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jqch3.gb1.brightbox.com,Uid:395fabd2fa682771cbbd6806bf561d49,Namespace:kube-system,Attempt:0,} returns sandbox id \"3575c9d3221144e5159ec57b91ad85324ba32f7de0aba24357e1d893e858fd5d\""
Jan 20 03:16:07.917472 containerd[1584]: time="2026-01-20T03:16:07.917425810Z" level=info msg="CreateContainer within sandbox \"3575c9d3221144e5159ec57b91ad85324ba32f7de0aba24357e1d893e858fd5d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 20 03:16:07.929986 containerd[1584]: time="2026-01-20T03:16:07.929943337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jqch3.gb1.brightbox.com,Uid:3b6395778eb8a0bc5eb8ab2bcb29928b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e09305b18b4f156ed21b10b9d5e7c6e588346183e756b1df1e72bbe48773a7f0\""
Jan 20 03:16:07.935846 containerd[1584]: time="2026-01-20T03:16:07.935792546Z" level=info msg="CreateContainer within sandbox \"e09305b18b4f156ed21b10b9d5e7c6e588346183e756b1df1e72bbe48773a7f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 20 03:16:07.940038 containerd[1584]: time="2026-01-20T03:16:07.939925728Z" level=info msg="Container 641af0a2a9bc5d69465efd94cef5fa6e4230659973577243e494c098fa1180e7: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:16:07.943729 containerd[1584]: time="2026-01-20T03:16:07.943658144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jqch3.gb1.brightbox.com,Uid:868bf72a905aef9cca2d6fd076bb6e00,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdf0d4db5543fe981bf110ee07c290b9a02e7cfdc176af431ac26064248a45ee\""
Jan 20 03:16:07.953030 containerd[1584]: time="2026-01-20T03:16:07.952993405Z" level=info msg="Container 1b15d47c75cbaccaf8c29dbc34564ba25bfccd6ab5c89a42500e20a56eede8c9: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:16:07.956018 containerd[1584]: time="2026-01-20T03:16:07.955943300Z" level=info msg="CreateContainer within sandbox \"fdf0d4db5543fe981bf110ee07c290b9a02e7cfdc176af431ac26064248a45ee\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 20 03:16:07.957357 containerd[1584]: time="2026-01-20T03:16:07.957275697Z" level=info msg="CreateContainer within sandbox \"3575c9d3221144e5159ec57b91ad85324ba32f7de0aba24357e1d893e858fd5d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"641af0a2a9bc5d69465efd94cef5fa6e4230659973577243e494c098fa1180e7\""
Jan 20 03:16:07.958339 containerd[1584]: time="2026-01-20T03:16:07.958181351Z" level=info msg="StartContainer for \"641af0a2a9bc5d69465efd94cef5fa6e4230659973577243e494c098fa1180e7\""
Jan 20 03:16:07.960675 containerd[1584]: time="2026-01-20T03:16:07.960644735Z" level=info msg="connecting to shim 641af0a2a9bc5d69465efd94cef5fa6e4230659973577243e494c098fa1180e7" address="unix:///run/containerd/s/650a83f61178037d7a417eb4f2b3b8be21970a67bc08294e978c6106cc66dc25" protocol=ttrpc version=3
Jan 20 03:16:07.961102 containerd[1584]: time="2026-01-20T03:16:07.960665681Z" level=info msg="CreateContainer within sandbox \"e09305b18b4f156ed21b10b9d5e7c6e588346183e756b1df1e72bbe48773a7f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b15d47c75cbaccaf8c29dbc34564ba25bfccd6ab5c89a42500e20a56eede8c9\""
Jan 20 03:16:07.961769 containerd[1584]: time="2026-01-20T03:16:07.961737757Z" level=info msg="StartContainer for \"1b15d47c75cbaccaf8c29dbc34564ba25bfccd6ab5c89a42500e20a56eede8c9\""
Jan 20 03:16:07.964421 containerd[1584]: time="2026-01-20T03:16:07.964264409Z" level=info msg="connecting to shim 1b15d47c75cbaccaf8c29dbc34564ba25bfccd6ab5c89a42500e20a56eede8c9" address="unix:///run/containerd/s/96a13ae828610719926b6d15d0ca00b18f45a5b78c0e3fff9fa366c6fd220845" protocol=ttrpc version=3
Jan 20 03:16:07.968448 containerd[1584]: time="2026-01-20T03:16:07.968398563Z" level=info msg="Container 6146d34c323cc975d3025f952dda308bf7a56eed49201083b6770afc2cc41fd8: CDI devices from CRI Config.CDIDevices: []"
Jan 20 03:16:07.997968 containerd[1584]: time="2026-01-20T03:16:07.997713828Z" level=info msg="CreateContainer within sandbox \"fdf0d4db5543fe981bf110ee07c290b9a02e7cfdc176af431ac26064248a45ee\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6146d34c323cc975d3025f952dda308bf7a56eed49201083b6770afc2cc41fd8\""
Jan 20 03:16:08.000872 containerd[1584]: time="2026-01-20T03:16:08.000843002Z" level=info msg="StartContainer for \"6146d34c323cc975d3025f952dda308bf7a56eed49201083b6770afc2cc41fd8\""
Jan 20 03:16:08.004701 containerd[1584]: time="2026-01-20T03:16:08.004647767Z" level=info msg="connecting to shim 6146d34c323cc975d3025f952dda308bf7a56eed49201083b6770afc2cc41fd8" address="unix:///run/containerd/s/3d4fa44f6acaa6debc03ee520d37c8ebd9de5c93d0777dd7a332ddd7012954e4" protocol=ttrpc version=3
Jan 20 03:16:08.008809 systemd[1]: Started cri-containerd-641af0a2a9bc5d69465efd94cef5fa6e4230659973577243e494c098fa1180e7.scope - libcontainer container 641af0a2a9bc5d69465efd94cef5fa6e4230659973577243e494c098fa1180e7.
Jan 20 03:16:08.018772 systemd[1]: Started cri-containerd-1b15d47c75cbaccaf8c29dbc34564ba25bfccd6ab5c89a42500e20a56eede8c9.scope - libcontainer container 1b15d47c75cbaccaf8c29dbc34564ba25bfccd6ab5c89a42500e20a56eede8c9.
Jan 20 03:16:08.050759 systemd[1]: Started cri-containerd-6146d34c323cc975d3025f952dda308bf7a56eed49201083b6770afc2cc41fd8.scope - libcontainer container 6146d34c323cc975d3025f952dda308bf7a56eed49201083b6770afc2cc41fd8.
Jan 20 03:16:08.083289 kubelet[2504]: E0120 03:16:08.083247 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.49.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 20 03:16:08.099872 kubelet[2504]: E0120 03:16:08.097560 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.49.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 20 03:16:08.149528 containerd[1584]: time="2026-01-20T03:16:08.149473342Z" level=info msg="StartContainer for \"641af0a2a9bc5d69465efd94cef5fa6e4230659973577243e494c098fa1180e7\" returns successfully"
Jan 20 03:16:08.198197 containerd[1584]: time="2026-01-20T03:16:08.198082654Z" level=info msg="StartContainer for \"6146d34c323cc975d3025f952dda308bf7a56eed49201083b6770afc2cc41fd8\" returns successfully"
Jan 20 03:16:08.199598 containerd[1584]: time="2026-01-20T03:16:08.199531069Z" level=info msg="StartContainer for \"1b15d47c75cbaccaf8c29dbc34564ba25bfccd6ab5c89a42500e20a56eede8c9\" returns successfully"
Jan 20 03:16:08.431747 kubelet[2504]: E0120 03:16:08.431541 2504 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jqch3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.118:6443: connect: connection refused" interval="1.6s"
Jan 20 03:16:08.538380 kubelet[2504]: E0120 03:16:08.538310 2504 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.49.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jqch3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.49.118:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 20 03:16:08.619832 kubelet[2504]: I0120 03:16:08.619750 2504 kubelet_node_status.go:75] "Attempting to register node" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:09.097588 kubelet[2504]: E0120 03:16:09.097521 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:09.102938 kubelet[2504]: E0120 03:16:09.102719 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:09.104667 kubelet[2504]: E0120 03:16:09.104429 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:10.108663 kubelet[2504]: E0120 03:16:10.107492 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:10.108663 kubelet[2504]: E0120 03:16:10.107960 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:10.109212 kubelet[2504]: E0120 03:16:10.108915 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.109927 kubelet[2504]: E0120 03:16:11.109691 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.110422 kubelet[2504]: E0120 03:16:11.109937 2504 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.729710 kubelet[2504]: E0120 03:16:11.729656 2504 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-jqch3.gb1.brightbox.com\" not found" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.847614 kubelet[2504]: I0120 03:16:11.847533 2504 kubelet_node_status.go:78] "Successfully registered node" node="srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.849819 kubelet[2504]: E0120 03:16:11.849776 2504 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-jqch3.gb1.brightbox.com\": node \"srv-jqch3.gb1.brightbox.com\" not found"
Jan 20 03:16:11.904246 kubelet[2504]: I0120 03:16:11.904191 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.918820 kubelet[2504]: E0120 03:16:11.918767 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.918820 kubelet[2504]: I0120 03:16:11.918814 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.922279 kubelet[2504]: E0120 03:16:11.922057 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jqch3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.922279 kubelet[2504]: I0120 03:16:11.922087 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.924065 kubelet[2504]: E0120 03:16:11.924043 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jqch3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:11.973552 kubelet[2504]: I0120 03:16:11.973166 2504 apiserver.go:52] "Watching apiserver"
Jan 20 03:16:12.005702 kubelet[2504]: I0120 03:16:12.005538 2504 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jan 20 03:16:12.441212 kubelet[2504]: I0120 03:16:12.441050 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:12.443949 kubelet[2504]: E0120 03:16:12.443909 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jqch3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com"
Jan 20 03:16:14.065803 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-11.scope)...
Jan 20 03:16:14.066336 systemd[1]: Reloading...
Jan 20 03:16:14.202661 zram_generator::config[2829]: No configuration found. Jan 20 03:16:14.548848 systemd[1]: Reloading finished in 481 ms. Jan 20 03:16:14.601526 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:16:14.614913 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 03:16:14.615372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:16:14.615457 systemd[1]: kubelet.service: Consumed 901ms CPU time, 128.2M memory peak. Jan 20 03:16:14.618016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 03:16:14.918239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 03:16:14.931437 (kubelet)[2893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 03:16:15.033753 kubelet[2893]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 03:16:15.033753 kubelet[2893]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 03:16:15.033753 kubelet[2893]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 03:16:15.034348 kubelet[2893]: I0120 03:16:15.033871 2893 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 03:16:15.056283 kubelet[2893]: I0120 03:16:15.056229 2893 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 03:16:15.056908 kubelet[2893]: I0120 03:16:15.056481 2893 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 03:16:15.057050 kubelet[2893]: I0120 03:16:15.057030 2893 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 03:16:15.062664 kubelet[2893]: I0120 03:16:15.062199 2893 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 03:16:15.068947 kubelet[2893]: I0120 03:16:15.068589 2893 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 03:16:15.079164 kubelet[2893]: I0120 03:16:15.078938 2893 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 03:16:15.085002 kubelet[2893]: I0120 03:16:15.084433 2893 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 03:16:15.086483 kubelet[2893]: I0120 03:16:15.085960 2893 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 03:16:15.086483 kubelet[2893]: I0120 03:16:15.085998 2893 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jqch3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 03:16:15.086483 kubelet[2893]: I0120 03:16:15.086238 2893 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 
03:16:15.086483 kubelet[2893]: I0120 03:16:15.086252 2893 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 03:16:15.086483 kubelet[2893]: I0120 03:16:15.086324 2893 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:16:15.088791 kubelet[2893]: I0120 03:16:15.086552 2893 kubelet.go:480] "Attempting to sync node with API server" Jan 20 03:16:15.088791 kubelet[2893]: I0120 03:16:15.086606 2893 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 03:16:15.088791 kubelet[2893]: I0120 03:16:15.086728 2893 kubelet.go:386] "Adding apiserver pod source" Jan 20 03:16:15.088791 kubelet[2893]: I0120 03:16:15.088761 2893 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 03:16:15.098228 kubelet[2893]: I0120 03:16:15.098155 2893 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 20 03:16:15.105608 kubelet[2893]: I0120 03:16:15.105306 2893 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 03:16:15.120817 kubelet[2893]: I0120 03:16:15.120648 2893 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 03:16:15.120817 kubelet[2893]: I0120 03:16:15.120726 2893 server.go:1289] "Started kubelet" Jan 20 03:16:15.128279 kubelet[2893]: I0120 03:16:15.124561 2893 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 03:16:15.143962 kubelet[2893]: I0120 03:16:15.142518 2893 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 03:16:15.146828 kubelet[2893]: I0120 03:16:15.146409 2893 server.go:317] "Adding debug handlers to kubelet server" Jan 20 03:16:15.150615 kubelet[2893]: I0120 03:16:15.150428 2893 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 03:16:15.151792 kubelet[2893]: I0120 03:16:15.151165 2893 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 03:16:15.151792 kubelet[2893]: I0120 03:16:15.151774 2893 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 03:16:15.155757 kubelet[2893]: I0120 03:16:15.155454 2893 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 03:16:15.161804 kubelet[2893]: I0120 03:16:15.155788 2893 reconciler.go:26] "Reconciler: start to sync state" Jan 20 03:16:15.163269 kubelet[2893]: I0120 03:16:15.163203 2893 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 03:16:15.180468 kubelet[2893]: I0120 03:16:15.180081 2893 factory.go:223] Registration of the containerd container factory successfully Jan 20 03:16:15.180468 kubelet[2893]: I0120 03:16:15.180110 2893 factory.go:223] Registration of the systemd container factory successfully Jan 20 03:16:15.180468 kubelet[2893]: I0120 03:16:15.180316 2893 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 03:16:15.185871 kubelet[2893]: E0120 03:16:15.185842 2893 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 03:16:15.203225 kubelet[2893]: I0120 03:16:15.203087 2893 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 03:16:15.209430 kubelet[2893]: I0120 03:16:15.208746 2893 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 20 03:16:15.209430 kubelet[2893]: I0120 03:16:15.208772 2893 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 03:16:15.209430 kubelet[2893]: I0120 03:16:15.208806 2893 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 03:16:15.209430 kubelet[2893]: I0120 03:16:15.208817 2893 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 03:16:15.209430 kubelet[2893]: E0120 03:16:15.208871 2893 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 03:16:15.309888 kubelet[2893]: E0120 03:16:15.309842 2893 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 03:16:15.316686 kubelet[2893]: I0120 03:16:15.316633 2893 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.316813 2893 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.316847 2893 state_mem.go:36] "Initialized new in-memory state store" Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.317080 2893 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.317099 2893 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.317140 2893 policy_none.go:49] "None policy: Start" Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.317154 2893 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.317171 2893 state_mem.go:35] "Initializing new in-memory state store" Jan 20 03:16:15.318201 kubelet[2893]: I0120 03:16:15.317337 2893 state_mem.go:75] "Updated machine memory state" Jan 20 03:16:15.334410 kubelet[2893]: E0120 03:16:15.334107 
2893 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 03:16:15.335425 kubelet[2893]: I0120 03:16:15.335407 2893 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 03:16:15.337324 kubelet[2893]: I0120 03:16:15.335747 2893 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 03:16:15.338128 kubelet[2893]: I0120 03:16:15.337854 2893 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 03:16:15.344085 kubelet[2893]: E0120 03:16:15.344040 2893 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 03:16:15.470809 kubelet[2893]: I0120 03:16:15.470103 2893 kubelet_node_status.go:75] "Attempting to register node" node="srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.483990 kubelet[2893]: I0120 03:16:15.483966 2893 kubelet_node_status.go:124] "Node was previously registered" node="srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.484206 kubelet[2893]: I0120 03:16:15.484186 2893 kubelet_node_status.go:78] "Successfully registered node" node="srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.510971 kubelet[2893]: I0120 03:16:15.510944 2893 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.511775 kubelet[2893]: I0120 03:16:15.511327 2893 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.512638 kubelet[2893]: I0120 03:16:15.511556 2893 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.527449 kubelet[2893]: I0120 03:16:15.527422 2893 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots]" Jan 20 03:16:15.530756 kubelet[2893]: I0120 03:16:15.530728 2893 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 03:16:15.531606 kubelet[2893]: I0120 03:16:15.531523 2893 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 20 03:16:15.567969 kubelet[2893]: I0120 03:16:15.567890 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-flexvolume-dir\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.568446 kubelet[2893]: I0120 03:16:15.568298 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-k8s-certs\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.568797 kubelet[2893]: I0120 03:16:15.568533 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-kubeconfig\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.568939 kubelet[2893]: I0120 03:16:15.568899 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.569238 kubelet[2893]: I0120 03:16:15.569176 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/395fabd2fa682771cbbd6806bf561d49-ca-certs\") pod \"kube-apiserver-srv-jqch3.gb1.brightbox.com\" (UID: \"395fabd2fa682771cbbd6806bf561d49\") " pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.569398 kubelet[2893]: I0120 03:16:15.569374 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/395fabd2fa682771cbbd6806bf561d49-k8s-certs\") pod \"kube-apiserver-srv-jqch3.gb1.brightbox.com\" (UID: \"395fabd2fa682771cbbd6806bf561d49\") " pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.569613 kubelet[2893]: I0120 03:16:15.569545 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/868bf72a905aef9cca2d6fd076bb6e00-kubeconfig\") pod \"kube-scheduler-srv-jqch3.gb1.brightbox.com\" (UID: \"868bf72a905aef9cca2d6fd076bb6e00\") " pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.569775 kubelet[2893]: I0120 03:16:15.569752 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/395fabd2fa682771cbbd6806bf561d49-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jqch3.gb1.brightbox.com\" (UID: \"395fabd2fa682771cbbd6806bf561d49\") " 
pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:15.569955 kubelet[2893]: I0120 03:16:15.569840 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b6395778eb8a0bc5eb8ab2bcb29928b-ca-certs\") pod \"kube-controller-manager-srv-jqch3.gb1.brightbox.com\" (UID: \"3b6395778eb8a0bc5eb8ab2bcb29928b\") " pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com" Jan 20 03:16:16.109569 kubelet[2893]: I0120 03:16:16.109514 2893 apiserver.go:52] "Watching apiserver" Jan 20 03:16:16.162165 kubelet[2893]: I0120 03:16:16.162100 2893 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 03:16:16.265442 kubelet[2893]: I0120 03:16:16.265299 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-jqch3.gb1.brightbox.com" podStartSLOduration=1.26517221 podStartE2EDuration="1.26517221s" podCreationTimestamp="2026-01-20 03:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:16:16.259070152 +0000 UTC m=+1.318541185" watchObservedRunningTime="2026-01-20 03:16:16.26517221 +0000 UTC m=+1.324643221" Jan 20 03:16:16.275269 kubelet[2893]: I0120 03:16:16.274729 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-jqch3.gb1.brightbox.com" podStartSLOduration=1.274714803 podStartE2EDuration="1.274714803s" podCreationTimestamp="2026-01-20 03:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:16:16.273270072 +0000 UTC m=+1.332741113" watchObservedRunningTime="2026-01-20 03:16:16.274714803 +0000 UTC m=+1.334185831" Jan 20 03:16:16.285250 kubelet[2893]: I0120 03:16:16.285142 2893 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-jqch3.gb1.brightbox.com" podStartSLOduration=1.285114477 podStartE2EDuration="1.285114477s" podCreationTimestamp="2026-01-20 03:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:16:16.283420395 +0000 UTC m=+1.342891442" watchObservedRunningTime="2026-01-20 03:16:16.285114477 +0000 UTC m=+1.344585501" Jan 20 03:16:20.092446 kubelet[2893]: I0120 03:16:20.092396 2893 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 03:16:20.093728 containerd[1584]: time="2026-01-20T03:16:20.093487440Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 03:16:20.094226 kubelet[2893]: I0120 03:16:20.094076 2893 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 03:16:20.376689 systemd[1]: Created slice kubepods-besteffort-pod637019cf_e0b4_436b_9428_fa6fa9d4821d.slice - libcontainer container kubepods-besteffort-pod637019cf_e0b4_436b_9428_fa6fa9d4821d.slice. 
Jan 20 03:16:20.399732 kubelet[2893]: I0120 03:16:20.399689 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/637019cf-e0b4-436b-9428-fa6fa9d4821d-kube-proxy\") pod \"kube-proxy-6p4tb\" (UID: \"637019cf-e0b4-436b-9428-fa6fa9d4821d\") " pod="kube-system/kube-proxy-6p4tb" Jan 20 03:16:20.400122 kubelet[2893]: I0120 03:16:20.399832 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/637019cf-e0b4-436b-9428-fa6fa9d4821d-xtables-lock\") pod \"kube-proxy-6p4tb\" (UID: \"637019cf-e0b4-436b-9428-fa6fa9d4821d\") " pod="kube-system/kube-proxy-6p4tb" Jan 20 03:16:20.400122 kubelet[2893]: I0120 03:16:20.399868 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/637019cf-e0b4-436b-9428-fa6fa9d4821d-lib-modules\") pod \"kube-proxy-6p4tb\" (UID: \"637019cf-e0b4-436b-9428-fa6fa9d4821d\") " pod="kube-system/kube-proxy-6p4tb" Jan 20 03:16:20.400122 kubelet[2893]: I0120 03:16:20.399910 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2hgl\" (UniqueName: \"kubernetes.io/projected/637019cf-e0b4-436b-9428-fa6fa9d4821d-kube-api-access-h2hgl\") pod \"kube-proxy-6p4tb\" (UID: \"637019cf-e0b4-436b-9428-fa6fa9d4821d\") " pod="kube-system/kube-proxy-6p4tb" Jan 20 03:16:20.506991 kubelet[2893]: E0120 03:16:20.506930 2893 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 20 03:16:20.506991 kubelet[2893]: E0120 03:16:20.506984 2893 projected.go:194] Error preparing data for projected volume kube-api-access-h2hgl for pod kube-system/kube-proxy-6p4tb: configmap "kube-root-ca.crt" not found Jan 20 03:16:20.507615 kubelet[2893]: E0120 03:16:20.507098 2893 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/637019cf-e0b4-436b-9428-fa6fa9d4821d-kube-api-access-h2hgl podName:637019cf-e0b4-436b-9428-fa6fa9d4821d nodeName:}" failed. No retries permitted until 2026-01-20 03:16:21.007062862 +0000 UTC m=+6.066533872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h2hgl" (UniqueName: "kubernetes.io/projected/637019cf-e0b4-436b-9428-fa6fa9d4821d-kube-api-access-h2hgl") pod "kube-proxy-6p4tb" (UID: "637019cf-e0b4-436b-9428-fa6fa9d4821d") : configmap "kube-root-ca.crt" not found Jan 20 03:16:21.262636 systemd[1]: Created slice kubepods-besteffort-pod099f217b_ef34_4d27_ba91_3d4446cb1ea9.slice - libcontainer container kubepods-besteffort-pod099f217b_ef34_4d27_ba91_3d4446cb1ea9.slice. Jan 20 03:16:21.291622 containerd[1584]: time="2026-01-20T03:16:21.290984894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6p4tb,Uid:637019cf-e0b4-436b-9428-fa6fa9d4821d,Namespace:kube-system,Attempt:0,}" Jan 20 03:16:21.309053 kubelet[2893]: I0120 03:16:21.308661 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prc52\" (UniqueName: \"kubernetes.io/projected/099f217b-ef34-4d27-ba91-3d4446cb1ea9-kube-api-access-prc52\") pod \"tigera-operator-7dcd859c48-rf262\" (UID: \"099f217b-ef34-4d27-ba91-3d4446cb1ea9\") " pod="tigera-operator/tigera-operator-7dcd859c48-rf262" Jan 20 03:16:21.309053 kubelet[2893]: I0120 03:16:21.308837 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/099f217b-ef34-4d27-ba91-3d4446cb1ea9-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rf262\" (UID: \"099f217b-ef34-4d27-ba91-3d4446cb1ea9\") " pod="tigera-operator/tigera-operator-7dcd859c48-rf262" Jan 20 03:16:21.316870 containerd[1584]: time="2026-01-20T03:16:21.316794316Z" level=info msg="connecting to shim 
c599ccb9069db63b5d49bdc6d1bb3a036ec1a2eff40c64d784c7319f4a95ab69" address="unix:///run/containerd/s/2622620cb34f50d483d2aa9e01f290d78f1c636fc21ff0cccc5a0503d9c8c29c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:16:21.364812 systemd[1]: Started cri-containerd-c599ccb9069db63b5d49bdc6d1bb3a036ec1a2eff40c64d784c7319f4a95ab69.scope - libcontainer container c599ccb9069db63b5d49bdc6d1bb3a036ec1a2eff40c64d784c7319f4a95ab69. Jan 20 03:16:21.406862 containerd[1584]: time="2026-01-20T03:16:21.406814185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6p4tb,Uid:637019cf-e0b4-436b-9428-fa6fa9d4821d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c599ccb9069db63b5d49bdc6d1bb3a036ec1a2eff40c64d784c7319f4a95ab69\"" Jan 20 03:16:21.415970 containerd[1584]: time="2026-01-20T03:16:21.415728205Z" level=info msg="CreateContainer within sandbox \"c599ccb9069db63b5d49bdc6d1bb3a036ec1a2eff40c64d784c7319f4a95ab69\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 03:16:21.437120 containerd[1584]: time="2026-01-20T03:16:21.437067926Z" level=info msg="Container f4ddbddf36b194ea50a7cd35c268710860bd0099b8d9d9b104df45d614a4b47a: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:16:21.438379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520997727.mount: Deactivated successfully. 
Jan 20 03:16:21.448867 containerd[1584]: time="2026-01-20T03:16:21.448802989Z" level=info msg="CreateContainer within sandbox \"c599ccb9069db63b5d49bdc6d1bb3a036ec1a2eff40c64d784c7319f4a95ab69\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4ddbddf36b194ea50a7cd35c268710860bd0099b8d9d9b104df45d614a4b47a\"" Jan 20 03:16:21.450658 containerd[1584]: time="2026-01-20T03:16:21.450237349Z" level=info msg="StartContainer for \"f4ddbddf36b194ea50a7cd35c268710860bd0099b8d9d9b104df45d614a4b47a\"" Jan 20 03:16:21.452212 containerd[1584]: time="2026-01-20T03:16:21.452176799Z" level=info msg="connecting to shim f4ddbddf36b194ea50a7cd35c268710860bd0099b8d9d9b104df45d614a4b47a" address="unix:///run/containerd/s/2622620cb34f50d483d2aa9e01f290d78f1c636fc21ff0cccc5a0503d9c8c29c" protocol=ttrpc version=3 Jan 20 03:16:21.482851 systemd[1]: Started cri-containerd-f4ddbddf36b194ea50a7cd35c268710860bd0099b8d9d9b104df45d614a4b47a.scope - libcontainer container f4ddbddf36b194ea50a7cd35c268710860bd0099b8d9d9b104df45d614a4b47a. 
Jan 20 03:16:21.572408 containerd[1584]: time="2026-01-20T03:16:21.572274604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rf262,Uid:099f217b-ef34-4d27-ba91-3d4446cb1ea9,Namespace:tigera-operator,Attempt:0,}" Jan 20 03:16:21.597930 containerd[1584]: time="2026-01-20T03:16:21.597857682Z" level=info msg="connecting to shim 0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958" address="unix:///run/containerd/s/706e4d92bf3ace29322a829239a22a2300c6c7cbfa9817afe89cbcc11dcd626c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:16:21.611869 containerd[1584]: time="2026-01-20T03:16:21.611824954Z" level=info msg="StartContainer for \"f4ddbddf36b194ea50a7cd35c268710860bd0099b8d9d9b104df45d614a4b47a\" returns successfully" Jan 20 03:16:21.644848 systemd[1]: Started cri-containerd-0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958.scope - libcontainer container 0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958. Jan 20 03:16:21.719332 containerd[1584]: time="2026-01-20T03:16:21.719204254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rf262,Uid:099f217b-ef34-4d27-ba91-3d4446cb1ea9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958\"" Jan 20 03:16:21.723809 containerd[1584]: time="2026-01-20T03:16:21.723776168Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 03:16:22.120117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3410213028.mount: Deactivated successfully. 
Jan 20 03:16:22.318010 kubelet[2893]: I0120 03:16:22.317823 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6p4tb" podStartSLOduration=2.317797551 podStartE2EDuration="2.317797551s" podCreationTimestamp="2026-01-20 03:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:16:22.316365424 +0000 UTC m=+7.375836475" watchObservedRunningTime="2026-01-20 03:16:22.317797551 +0000 UTC m=+7.377268585" Jan 20 03:16:23.885279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount216280799.mount: Deactivated successfully. Jan 20 03:16:25.394174 containerd[1584]: time="2026-01-20T03:16:25.393682334Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:25.395807 containerd[1584]: time="2026-01-20T03:16:25.395763912Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 03:16:25.397316 containerd[1584]: time="2026-01-20T03:16:25.397238103Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:25.399755 containerd[1584]: time="2026-01-20T03:16:25.399711752Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:25.401766 containerd[1584]: time="2026-01-20T03:16:25.401720049Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.67790059s" Jan 20 03:16:25.401867 containerd[1584]: time="2026-01-20T03:16:25.401777129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 03:16:25.408256 containerd[1584]: time="2026-01-20T03:16:25.408048982Z" level=info msg="CreateContainer within sandbox \"0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 03:16:25.418113 containerd[1584]: time="2026-01-20T03:16:25.418068749Z" level=info msg="Container 1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:16:25.433675 containerd[1584]: time="2026-01-20T03:16:25.433571122Z" level=info msg="CreateContainer within sandbox \"0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece\"" Jan 20 03:16:25.436846 containerd[1584]: time="2026-01-20T03:16:25.435340377Z" level=info msg="StartContainer for \"1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece\"" Jan 20 03:16:25.441972 containerd[1584]: time="2026-01-20T03:16:25.441637217Z" level=info msg="connecting to shim 1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece" address="unix:///run/containerd/s/706e4d92bf3ace29322a829239a22a2300c6c7cbfa9817afe89cbcc11dcd626c" protocol=ttrpc version=3 Jan 20 03:16:25.486840 systemd[1]: Started cri-containerd-1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece.scope - libcontainer container 1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece. 
Jan 20 03:16:25.582786 containerd[1584]: time="2026-01-20T03:16:25.582726883Z" level=info msg="StartContainer for \"1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece\" returns successfully" Jan 20 03:16:29.203461 systemd[1]: cri-containerd-1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece.scope: Deactivated successfully. Jan 20 03:16:29.280042 containerd[1584]: time="2026-01-20T03:16:29.279936790Z" level=info msg="received container exit event container_id:\"1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece\" id:\"1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece\" pid:3218 exit_status:1 exited_at:{seconds:1768878989 nanos:205052769}" Jan 20 03:16:29.365941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece-rootfs.mount: Deactivated successfully. Jan 20 03:16:30.311662 kubelet[2893]: I0120 03:16:30.311624 2893 scope.go:117] "RemoveContainer" containerID="1cc52442cfd74864fd67a47f49a884a37b99b4219b478ca37bce3d772d3f3ece" Jan 20 03:16:30.320167 containerd[1584]: time="2026-01-20T03:16:30.320045227Z" level=info msg="CreateContainer within sandbox \"0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 20 03:16:30.340547 containerd[1584]: time="2026-01-20T03:16:30.339803388Z" level=info msg="Container a0564a321536fce3bb65250600d9be12147acd4daa682cf5f17a6766202eed16: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:16:30.349076 containerd[1584]: time="2026-01-20T03:16:30.349036654Z" level=info msg="CreateContainer within sandbox \"0e9d331d9c37ac4c0eb9400730d14bcd0ab39388e03436ce9b359b47c2b42958\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a0564a321536fce3bb65250600d9be12147acd4daa682cf5f17a6766202eed16\"" Jan 20 03:16:30.350225 containerd[1584]: time="2026-01-20T03:16:30.350185538Z" level=info msg="StartContainer 
for \"a0564a321536fce3bb65250600d9be12147acd4daa682cf5f17a6766202eed16\"" Jan 20 03:16:30.353075 containerd[1584]: time="2026-01-20T03:16:30.352847152Z" level=info msg="connecting to shim a0564a321536fce3bb65250600d9be12147acd4daa682cf5f17a6766202eed16" address="unix:///run/containerd/s/706e4d92bf3ace29322a829239a22a2300c6c7cbfa9817afe89cbcc11dcd626c" protocol=ttrpc version=3 Jan 20 03:16:30.394763 systemd[1]: Started cri-containerd-a0564a321536fce3bb65250600d9be12147acd4daa682cf5f17a6766202eed16.scope - libcontainer container a0564a321536fce3bb65250600d9be12147acd4daa682cf5f17a6766202eed16. Jan 20 03:16:30.483607 containerd[1584]: time="2026-01-20T03:16:30.483436377Z" level=info msg="StartContainer for \"a0564a321536fce3bb65250600d9be12147acd4daa682cf5f17a6766202eed16\" returns successfully" Jan 20 03:16:31.326134 kubelet[2893]: I0120 03:16:31.325590 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rf262" podStartSLOduration=6.6441379640000005 podStartE2EDuration="10.325571091s" podCreationTimestamp="2026-01-20 03:16:21 +0000 UTC" firstStartedPulling="2026-01-20 03:16:21.721562034 +0000 UTC m=+6.781033039" lastFinishedPulling="2026-01-20 03:16:25.402995161 +0000 UTC m=+10.462466166" observedRunningTime="2026-01-20 03:16:26.307377643 +0000 UTC m=+11.366848687" watchObservedRunningTime="2026-01-20 03:16:31.325571091 +0000 UTC m=+16.385042109" Jan 20 03:16:32.909257 sudo[1868]: pam_unix(sudo:session): session closed for user root Jan 20 03:16:33.005608 sshd[1867]: Connection closed by 20.161.92.111 port 36152 Jan 20 03:16:33.005690 sshd-session[1864]: pam_unix(sshd:session): session closed for user core Jan 20 03:16:33.011095 systemd[1]: sshd@8-10.230.49.118:22-20.161.92.111:36152.service: Deactivated successfully. Jan 20 03:16:33.014227 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 03:16:33.014833 systemd[1]: session-11.scope: Consumed 7.216s CPU time, 155.8M memory peak. 
Jan 20 03:16:33.018034 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. Jan 20 03:16:33.020378 systemd-logind[1564]: Removed session 11. Jan 20 03:16:35.022809 systemd[1]: Started sshd@11-10.230.49.118:22-164.92.217.44:60874.service - OpenSSH per-connection server daemon (164.92.217.44:60874). Jan 20 03:16:35.383578 sshd[3329]: Invalid user search from 164.92.217.44 port 60874 Jan 20 03:16:35.465415 sshd[3329]: Connection closed by invalid user search 164.92.217.44 port 60874 [preauth] Jan 20 03:16:35.469610 systemd[1]: sshd@11-10.230.49.118:22-164.92.217.44:60874.service: Deactivated successfully. Jan 20 03:16:41.404363 systemd[1]: Created slice kubepods-besteffort-pod00f06997_583c_4319_8e3b_7107c6d675a1.slice - libcontainer container kubepods-besteffort-pod00f06997_583c_4319_8e3b_7107c6d675a1.slice. Jan 20 03:16:41.444112 kubelet[2893]: I0120 03:16:41.444060 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00f06997-583c-4319-8e3b-7107c6d675a1-tigera-ca-bundle\") pod \"calico-typha-64559bb65b-jrwd5\" (UID: \"00f06997-583c-4319-8e3b-7107c6d675a1\") " pod="calico-system/calico-typha-64559bb65b-jrwd5" Jan 20 03:16:41.444774 kubelet[2893]: I0120 03:16:41.444123 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxkph\" (UniqueName: \"kubernetes.io/projected/00f06997-583c-4319-8e3b-7107c6d675a1-kube-api-access-sxkph\") pod \"calico-typha-64559bb65b-jrwd5\" (UID: \"00f06997-583c-4319-8e3b-7107c6d675a1\") " pod="calico-system/calico-typha-64559bb65b-jrwd5" Jan 20 03:16:41.444774 kubelet[2893]: I0120 03:16:41.444200 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/00f06997-583c-4319-8e3b-7107c6d675a1-typha-certs\") pod \"calico-typha-64559bb65b-jrwd5\" (UID: 
\"00f06997-583c-4319-8e3b-7107c6d675a1\") " pod="calico-system/calico-typha-64559bb65b-jrwd5" Jan 20 03:16:41.665291 systemd[1]: Created slice kubepods-besteffort-pod879416d1_ee9f_4ec6_a0e9_e0d10b9a5c15.slice - libcontainer container kubepods-besteffort-pod879416d1_ee9f_4ec6_a0e9_e0d10b9a5c15.slice. Jan 20 03:16:41.716982 containerd[1584]: time="2026-01-20T03:16:41.716891112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64559bb65b-jrwd5,Uid:00f06997-583c-4319-8e3b-7107c6d675a1,Namespace:calico-system,Attempt:0,}" Jan 20 03:16:41.747459 kubelet[2893]: I0120 03:16:41.747319 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-policysync\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.748187 kubelet[2893]: I0120 03:16:41.747790 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-cni-bin-dir\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.748187 kubelet[2893]: I0120 03:16:41.747918 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-lib-modules\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.748826 kubelet[2893]: I0120 03:16:41.748745 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-node-certs\") pod \"calico-node-k9wfd\" (UID: 
\"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749484 kubelet[2893]: I0120 03:16:41.749054 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-tigera-ca-bundle\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749484 kubelet[2893]: I0120 03:16:41.749088 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-cni-net-dir\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749484 kubelet[2893]: I0120 03:16:41.749120 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-var-lib-calico\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749484 kubelet[2893]: I0120 03:16:41.749145 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-var-run-calico\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749484 kubelet[2893]: I0120 03:16:41.749229 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2x2\" (UniqueName: \"kubernetes.io/projected/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-kube-api-access-2f2x2\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " 
pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749806 kubelet[2893]: I0120 03:16:41.749265 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-cni-log-dir\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749806 kubelet[2893]: I0120 03:16:41.749290 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-flexvol-driver-host\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.749806 kubelet[2893]: I0120 03:16:41.749323 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15-xtables-lock\") pod \"calico-node-k9wfd\" (UID: \"879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15\") " pod="calico-system/calico-node-k9wfd" Jan 20 03:16:41.771806 containerd[1584]: time="2026-01-20T03:16:41.771758937Z" level=info msg="connecting to shim e2846f133b73299e6a1ec926d53bb7c9863774b65d1884e8bcdd303d906beb42" address="unix:///run/containerd/s/d110e9f9cfbfd64a8547832a3e296b67f028ded26e226f53b434e055d9ac2808" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:16:41.799044 kubelet[2893]: E0120 03:16:41.798986 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:16:41.852014 kubelet[2893]: I0120 03:16:41.850271 2893 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmsdb\" (UniqueName: \"kubernetes.io/projected/4233551d-98b7-48f5-b9e1-45373c718e78-kube-api-access-nmsdb\") pod \"csi-node-driver-vfgpq\" (UID: \"4233551d-98b7-48f5-b9e1-45373c718e78\") " pod="calico-system/csi-node-driver-vfgpq" Jan 20 03:16:41.852014 kubelet[2893]: I0120 03:16:41.850369 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4233551d-98b7-48f5-b9e1-45373c718e78-registration-dir\") pod \"csi-node-driver-vfgpq\" (UID: \"4233551d-98b7-48f5-b9e1-45373c718e78\") " pod="calico-system/csi-node-driver-vfgpq" Jan 20 03:16:41.852014 kubelet[2893]: I0120 03:16:41.850407 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4233551d-98b7-48f5-b9e1-45373c718e78-varrun\") pod \"csi-node-driver-vfgpq\" (UID: \"4233551d-98b7-48f5-b9e1-45373c718e78\") " pod="calico-system/csi-node-driver-vfgpq" Jan 20 03:16:41.852014 kubelet[2893]: I0120 03:16:41.850446 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4233551d-98b7-48f5-b9e1-45373c718e78-kubelet-dir\") pod \"csi-node-driver-vfgpq\" (UID: \"4233551d-98b7-48f5-b9e1-45373c718e78\") " pod="calico-system/csi-node-driver-vfgpq" Jan 20 03:16:41.852014 kubelet[2893]: I0120 03:16:41.850517 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4233551d-98b7-48f5-b9e1-45373c718e78-socket-dir\") pod \"csi-node-driver-vfgpq\" (UID: \"4233551d-98b7-48f5-b9e1-45373c718e78\") " pod="calico-system/csi-node-driver-vfgpq" Jan 20 03:16:41.868839 kubelet[2893]: E0120 03:16:41.868793 2893 driver-call.go:262] Failed to unmarshal output for command: 
init, output: "", error: unexpected end of JSON input Jan 20 03:16:41.869014 kubelet[2893]: W0120 03:16:41.868987 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:41.871629 kubelet[2893]: E0120 03:16:41.870541 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:41.889516 systemd[1]: Started cri-containerd-e2846f133b73299e6a1ec926d53bb7c9863774b65d1884e8bcdd303d906beb42.scope - libcontainer container e2846f133b73299e6a1ec926d53bb7c9863774b65d1884e8bcdd303d906beb42. Jan 20 03:16:41.897339 kubelet[2893]: E0120 03:16:41.897296 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:41.897529 kubelet[2893]: W0120 03:16:41.897505 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:41.897751 kubelet[2893]: E0120 03:16:41.897725 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:41.972552 containerd[1584]: time="2026-01-20T03:16:41.972512867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k9wfd,Uid:879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15,Namespace:calico-system,Attempt:0,}" Jan 20 03:16:41.987033 kubelet[2893]: E0120 03:16:41.986991 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:41.987146 kubelet[2893]: W0120 03:16:41.987046 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:41.987146 kubelet[2893]: E0120 03:16:41.987068 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:42.005274 containerd[1584]: time="2026-01-20T03:16:42.005192855Z" level=info msg="connecting to shim 86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205" address="unix:///run/containerd/s/291af159a5f3b70a3a6206bc424028379bdaf5aa35ae7a4cce85cf1411eedc35" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:16:42.025163 containerd[1584]: time="2026-01-20T03:16:42.025118914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64559bb65b-jrwd5,Uid:00f06997-583c-4319-8e3b-7107c6d675a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2846f133b73299e6a1ec926d53bb7c9863774b65d1884e8bcdd303d906beb42\"" Jan 20 03:16:42.028392 containerd[1584]: time="2026-01-20T03:16:42.028335039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 03:16:42.048812 systemd[1]: Started cri-containerd-86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205.scope - libcontainer container 86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205. Jan 20 03:16:42.090903 containerd[1584]: time="2026-01-20T03:16:42.090781068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k9wfd,Uid:879416d1-ee9f-4ec6-a0e9-e0d10b9a5c15,Namespace:calico-system,Attempt:0,} returns sandbox id \"86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205\"" Jan 20 03:16:43.211191 kubelet[2893]: E0120 03:16:43.211119 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:16:43.514314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296606239.mount: Deactivated successfully. 
Jan 20 03:16:45.014627 containerd[1584]: time="2026-01-20T03:16:45.014541289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:45.016032 containerd[1584]: time="2026-01-20T03:16:45.015989189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 20 03:16:45.016557 containerd[1584]: time="2026-01-20T03:16:45.016516338Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:45.019040 containerd[1584]: time="2026-01-20T03:16:45.018765337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:45.019802 containerd[1584]: time="2026-01-20T03:16:45.019760661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.991354074s" Jan 20 03:16:45.019964 containerd[1584]: time="2026-01-20T03:16:45.019915759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 20 03:16:45.021730 containerd[1584]: time="2026-01-20T03:16:45.021695327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 03:16:45.065447 containerd[1584]: time="2026-01-20T03:16:45.063827204Z" level=info msg="CreateContainer within sandbox \"e2846f133b73299e6a1ec926d53bb7c9863774b65d1884e8bcdd303d906beb42\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 03:16:45.074779 containerd[1584]: time="2026-01-20T03:16:45.074751238Z" level=info msg="Container 5865402cea2a45945d55314ed6d1116e3a72babfb32c52d2aa09e089108d0477: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:16:45.086404 containerd[1584]: time="2026-01-20T03:16:45.086369487Z" level=info msg="CreateContainer within sandbox \"e2846f133b73299e6a1ec926d53bb7c9863774b65d1884e8bcdd303d906beb42\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5865402cea2a45945d55314ed6d1116e3a72babfb32c52d2aa09e089108d0477\"" Jan 20 03:16:45.087266 containerd[1584]: time="2026-01-20T03:16:45.087229827Z" level=info msg="StartContainer for \"5865402cea2a45945d55314ed6d1116e3a72babfb32c52d2aa09e089108d0477\"" Jan 20 03:16:45.089004 containerd[1584]: time="2026-01-20T03:16:45.088788826Z" level=info msg="connecting to shim 5865402cea2a45945d55314ed6d1116e3a72babfb32c52d2aa09e089108d0477" address="unix:///run/containerd/s/d110e9f9cfbfd64a8547832a3e296b67f028ded26e226f53b434e055d9ac2808" protocol=ttrpc version=3 Jan 20 03:16:45.120803 systemd[1]: Started cri-containerd-5865402cea2a45945d55314ed6d1116e3a72babfb32c52d2aa09e089108d0477.scope - libcontainer container 5865402cea2a45945d55314ed6d1116e3a72babfb32c52d2aa09e089108d0477. 
Jan 20 03:16:45.196316 containerd[1584]: time="2026-01-20T03:16:45.196241250Z" level=info msg="StartContainer for \"5865402cea2a45945d55314ed6d1116e3a72babfb32c52d2aa09e089108d0477\" returns successfully" Jan 20 03:16:45.212040 kubelet[2893]: E0120 03:16:45.211895 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:16:45.465261 kubelet[2893]: E0120 03:16:45.465216 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.465261 kubelet[2893]: W0120 03:16:45.465249 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.465479 kubelet[2893]: E0120 03:16:45.465289 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.466081 kubelet[2893]: E0120 03:16:45.465741 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.466081 kubelet[2893]: W0120 03:16:45.465755 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.466081 kubelet[2893]: E0120 03:16:45.465772 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.466081 kubelet[2893]: E0120 03:16:45.466067 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.466081 kubelet[2893]: W0120 03:16:45.466080 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.466350 kubelet[2893]: E0120 03:16:45.466095 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.468708 kubelet[2893]: E0120 03:16:45.468674 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.468708 kubelet[2893]: W0120 03:16:45.468693 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.468708 kubelet[2893]: E0120 03:16:45.468709 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.468976 kubelet[2893]: E0120 03:16:45.468956 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.468976 kubelet[2893]: W0120 03:16:45.468974 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.469660 kubelet[2893]: E0120 03:16:45.468987 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.469660 kubelet[2893]: E0120 03:16:45.469198 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.469660 kubelet[2893]: W0120 03:16:45.469218 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.469660 kubelet[2893]: E0120 03:16:45.469232 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.469660 kubelet[2893]: E0120 03:16:45.469430 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.469660 kubelet[2893]: W0120 03:16:45.469453 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.469660 kubelet[2893]: E0120 03:16:45.469465 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.470226 kubelet[2893]: E0120 03:16:45.469697 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.470226 kubelet[2893]: W0120 03:16:45.469710 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.470226 kubelet[2893]: E0120 03:16:45.469732 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.470226 kubelet[2893]: E0120 03:16:45.469942 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.470226 kubelet[2893]: W0120 03:16:45.469954 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.470226 kubelet[2893]: E0120 03:16:45.469967 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.470721 kubelet[2893]: E0120 03:16:45.470700 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.470721 kubelet[2893]: W0120 03:16:45.470717 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.470854 kubelet[2893]: E0120 03:16:45.470732 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.470975 kubelet[2893]: E0120 03:16:45.470946 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.470975 kubelet[2893]: W0120 03:16:45.470964 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.472151 kubelet[2893]: E0120 03:16:45.470978 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.472151 kubelet[2893]: E0120 03:16:45.472104 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.472151 kubelet[2893]: W0120 03:16:45.472117 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.472151 kubelet[2893]: E0120 03:16:45.472131 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.472856 kubelet[2893]: E0120 03:16:45.472369 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.472856 kubelet[2893]: W0120 03:16:45.472381 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.472856 kubelet[2893]: E0120 03:16:45.472403 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.473349 kubelet[2893]: E0120 03:16:45.473323 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.473349 kubelet[2893]: W0120 03:16:45.473345 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.473518 kubelet[2893]: E0120 03:16:45.473373 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.474387 kubelet[2893]: E0120 03:16:45.474367 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.474387 kubelet[2893]: W0120 03:16:45.474384 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.474528 kubelet[2893]: E0120 03:16:45.474397 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.488389 kubelet[2893]: E0120 03:16:45.488358 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.488389 kubelet[2893]: W0120 03:16:45.488381 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.488695 kubelet[2893]: E0120 03:16:45.488401 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.490346 kubelet[2893]: E0120 03:16:45.490324 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.490485 kubelet[2893]: W0120 03:16:45.490361 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.490485 kubelet[2893]: E0120 03:16:45.490380 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.491025 kubelet[2893]: E0120 03:16:45.490997 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.491025 kubelet[2893]: W0120 03:16:45.491011 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.491025 kubelet[2893]: E0120 03:16:45.491025 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.491563 kubelet[2893]: E0120 03:16:45.491540 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.491563 kubelet[2893]: W0120 03:16:45.491557 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.491752 kubelet[2893]: E0120 03:16:45.491574 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.492860 kubelet[2893]: E0120 03:16:45.492839 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.492960 kubelet[2893]: W0120 03:16:45.492872 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.492960 kubelet[2893]: E0120 03:16:45.492890 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.493194 kubelet[2893]: E0120 03:16:45.493175 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.493268 kubelet[2893]: W0120 03:16:45.493208 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.493268 kubelet[2893]: E0120 03:16:45.493224 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.493499 kubelet[2893]: E0120 03:16:45.493482 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.493499 kubelet[2893]: W0120 03:16:45.493498 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.493710 kubelet[2893]: E0120 03:16:45.493529 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.493975 kubelet[2893]: E0120 03:16:45.493909 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.493975 kubelet[2893]: W0120 03:16:45.493926 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.493975 kubelet[2893]: E0120 03:16:45.493950 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.502228 kubelet[2893]: E0120 03:16:45.502204 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.502324 kubelet[2893]: W0120 03:16:45.502224 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.502324 kubelet[2893]: E0120 03:16:45.502260 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.502880 kubelet[2893]: E0120 03:16:45.502570 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.502880 kubelet[2893]: W0120 03:16:45.502608 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.502880 kubelet[2893]: E0120 03:16:45.502637 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.504022 kubelet[2893]: E0120 03:16:45.503993 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.504022 kubelet[2893]: W0120 03:16:45.504012 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.504229 kubelet[2893]: E0120 03:16:45.504038 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.504337 kubelet[2893]: E0120 03:16:45.504263 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.504337 kubelet[2893]: W0120 03:16:45.504276 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.504337 kubelet[2893]: E0120 03:16:45.504301 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.506665 kubelet[2893]: E0120 03:16:45.504561 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.506665 kubelet[2893]: W0120 03:16:45.504579 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.506665 kubelet[2893]: E0120 03:16:45.504608 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.506665 kubelet[2893]: E0120 03:16:45.505354 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.506665 kubelet[2893]: W0120 03:16:45.505379 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.506665 kubelet[2893]: E0120 03:16:45.505576 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.506984 kubelet[2893]: E0120 03:16:45.506877 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.506984 kubelet[2893]: W0120 03:16:45.506890 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.506984 kubelet[2893]: E0120 03:16:45.506904 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.507259 kubelet[2893]: E0120 03:16:45.507236 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.507259 kubelet[2893]: W0120 03:16:45.507256 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.507378 kubelet[2893]: E0120 03:16:45.507271 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:45.507561 kubelet[2893]: E0120 03:16:45.507534 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.507561 kubelet[2893]: W0120 03:16:45.507553 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.507742 kubelet[2893]: E0120 03:16:45.507567 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:45.509802 kubelet[2893]: E0120 03:16:45.509766 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:45.509802 kubelet[2893]: W0120 03:16:45.509786 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:45.509918 kubelet[2893]: E0120 03:16:45.509816 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.378414 kubelet[2893]: I0120 03:16:46.378014 2893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:16:46.379982 kubelet[2893]: E0120 03:16:46.379954 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.379982 kubelet[2893]: W0120 03:16:46.379980 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.380103 kubelet[2893]: E0120 03:16:46.380021 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.380330 kubelet[2893]: E0120 03:16:46.380311 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.380392 kubelet[2893]: W0120 03:16:46.380345 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.380392 kubelet[2893]: E0120 03:16:46.380363 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.380869 kubelet[2893]: E0120 03:16:46.380848 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.380939 kubelet[2893]: W0120 03:16:46.380896 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.380939 kubelet[2893]: E0120 03:16:46.380927 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.381408 kubelet[2893]: E0120 03:16:46.381308 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.381478 kubelet[2893]: W0120 03:16:46.381409 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.381478 kubelet[2893]: E0120 03:16:46.381427 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.381876 kubelet[2893]: E0120 03:16:46.381857 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.381956 kubelet[2893]: W0120 03:16:46.381874 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.381956 kubelet[2893]: E0120 03:16:46.381898 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.382318 kubelet[2893]: E0120 03:16:46.382299 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.382398 kubelet[2893]: W0120 03:16:46.382317 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.382398 kubelet[2893]: E0120 03:16:46.382338 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.382787 kubelet[2893]: E0120 03:16:46.382768 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.382787 kubelet[2893]: W0120 03:16:46.382786 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.382923 kubelet[2893]: E0120 03:16:46.382801 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.383178 kubelet[2893]: E0120 03:16:46.383159 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.383178 kubelet[2893]: W0120 03:16:46.383176 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.383282 kubelet[2893]: E0120 03:16:46.383191 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.383532 kubelet[2893]: E0120 03:16:46.383513 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.383532 kubelet[2893]: W0120 03:16:46.383530 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.383681 kubelet[2893]: E0120 03:16:46.383570 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.383969 kubelet[2893]: E0120 03:16:46.383949 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.383969 kubelet[2893]: W0120 03:16:46.383967 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.384073 kubelet[2893]: E0120 03:16:46.383983 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.384215 kubelet[2893]: E0120 03:16:46.384197 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.384215 kubelet[2893]: W0120 03:16:46.384214 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.384320 kubelet[2893]: E0120 03:16:46.384228 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.384555 kubelet[2893]: E0120 03:16:46.384537 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.384555 kubelet[2893]: W0120 03:16:46.384553 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.384714 kubelet[2893]: E0120 03:16:46.384568 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.384863 kubelet[2893]: E0120 03:16:46.384845 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.384863 kubelet[2893]: W0120 03:16:46.384863 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.384979 kubelet[2893]: E0120 03:16:46.384884 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.385508 kubelet[2893]: E0120 03:16:46.385486 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.385508 kubelet[2893]: W0120 03:16:46.385507 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.385631 kubelet[2893]: E0120 03:16:46.385527 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.385966 kubelet[2893]: E0120 03:16:46.385946 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.386046 kubelet[2893]: W0120 03:16:46.385987 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.386046 kubelet[2893]: E0120 03:16:46.386005 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.398615 kubelet[2893]: E0120 03:16:46.398550 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.398615 kubelet[2893]: W0120 03:16:46.398657 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.398615 kubelet[2893]: E0120 03:16:46.398711 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.399378 kubelet[2893]: E0120 03:16:46.399357 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.399497 kubelet[2893]: W0120 03:16:46.399471 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.399793 kubelet[2893]: E0120 03:16:46.399728 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.402060 kubelet[2893]: E0120 03:16:46.402026 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.402144 kubelet[2893]: W0120 03:16:46.402054 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.402144 kubelet[2893]: E0120 03:16:46.402085 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.403467 kubelet[2893]: E0120 03:16:46.403442 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.403467 kubelet[2893]: W0120 03:16:46.403461 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.403664 kubelet[2893]: E0120 03:16:46.403477 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.406462 kubelet[2893]: E0120 03:16:46.405687 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.406462 kubelet[2893]: W0120 03:16:46.405747 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.406462 kubelet[2893]: E0120 03:16:46.405782 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.406462 kubelet[2893]: E0120 03:16:46.406142 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.406462 kubelet[2893]: W0120 03:16:46.406157 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.406462 kubelet[2893]: E0120 03:16:46.406172 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.406815 kubelet[2893]: E0120 03:16:46.406499 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.406815 kubelet[2893]: W0120 03:16:46.406513 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.406815 kubelet[2893]: E0120 03:16:46.406527 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.406989 kubelet[2893]: E0120 03:16:46.406921 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.406989 kubelet[2893]: W0120 03:16:46.406936 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.408925 kubelet[2893]: E0120 03:16:46.407011 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.408925 kubelet[2893]: E0120 03:16:46.407472 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.408925 kubelet[2893]: W0120 03:16:46.407485 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.408925 kubelet[2893]: E0120 03:16:46.407499 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.408925 kubelet[2893]: E0120 03:16:46.408326 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.408925 kubelet[2893]: W0120 03:16:46.408342 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.408925 kubelet[2893]: E0120 03:16:46.408356 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.408925 kubelet[2893]: E0120 03:16:46.408737 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.408925 kubelet[2893]: W0120 03:16:46.408752 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.408925 kubelet[2893]: E0120 03:16:46.408766 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.410375 kubelet[2893]: E0120 03:16:46.409005 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.410375 kubelet[2893]: W0120 03:16:46.409018 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.410375 kubelet[2893]: E0120 03:16:46.409031 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.410375 kubelet[2893]: E0120 03:16:46.409320 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.410375 kubelet[2893]: W0120 03:16:46.409332 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.410375 kubelet[2893]: E0120 03:16:46.409345 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.410375 kubelet[2893]: E0120 03:16:46.409964 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.410375 kubelet[2893]: W0120 03:16:46.409983 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.410375 kubelet[2893]: E0120 03:16:46.410001 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.411176 kubelet[2893]: E0120 03:16:46.410719 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.411176 kubelet[2893]: W0120 03:16:46.410742 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.411176 kubelet[2893]: E0120 03:16:46.410758 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.411669 kubelet[2893]: E0120 03:16:46.411502 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.411669 kubelet[2893]: W0120 03:16:46.411520 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.411669 kubelet[2893]: E0120 03:16:46.411540 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.412073 kubelet[2893]: E0120 03:16:46.411954 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.412073 kubelet[2893]: W0120 03:16:46.411973 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.412073 kubelet[2893]: E0120 03:16:46.411988 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 03:16:46.412519 kubelet[2893]: E0120 03:16:46.412496 2893 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 03:16:46.412519 kubelet[2893]: W0120 03:16:46.412515 2893 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 03:16:46.412651 kubelet[2893]: E0120 03:16:46.412531 2893 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 03:16:46.783439 containerd[1584]: time="2026-01-20T03:16:46.783337327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:46.784989 containerd[1584]: time="2026-01-20T03:16:46.784764195Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 20 03:16:46.785856 containerd[1584]: time="2026-01-20T03:16:46.785804224Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:46.791993 containerd[1584]: time="2026-01-20T03:16:46.791939560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:46.793007 containerd[1584]: time="2026-01-20T03:16:46.792972380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.771235904s" Jan 20 03:16:46.793341 containerd[1584]: time="2026-01-20T03:16:46.793118402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 03:16:46.800776 containerd[1584]: time="2026-01-20T03:16:46.800550625Z" level=info msg="CreateContainer within sandbox \"86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 03:16:46.816797 containerd[1584]: time="2026-01-20T03:16:46.816742608Z" level=info msg="Container 1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:16:46.831020 containerd[1584]: time="2026-01-20T03:16:46.830973142Z" level=info msg="CreateContainer within sandbox \"86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f\"" Jan 20 03:16:46.832240 containerd[1584]: time="2026-01-20T03:16:46.832209997Z" level=info msg="StartContainer for \"1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f\"" Jan 20 03:16:46.835923 containerd[1584]: time="2026-01-20T03:16:46.835803955Z" level=info msg="connecting to shim 1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f" address="unix:///run/containerd/s/291af159a5f3b70a3a6206bc424028379bdaf5aa35ae7a4cce85cf1411eedc35" protocol=ttrpc version=3 Jan 20 03:16:46.879848 systemd[1]: Started cri-containerd-1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f.scope - libcontainer container 1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f. Jan 20 03:16:46.982990 containerd[1584]: time="2026-01-20T03:16:46.982853966Z" level=info msg="StartContainer for \"1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f\" returns successfully" Jan 20 03:16:47.008594 systemd[1]: cri-containerd-1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f.scope: Deactivated successfully. 
Jan 20 03:16:47.014937 containerd[1584]: time="2026-01-20T03:16:47.014582914Z" level=info msg="received container exit event container_id:\"1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f\" id:\"1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f\" pid:3615 exited_at:{seconds:1768879007 nanos:13744709}" Jan 20 03:16:47.049495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1381b94f2bbb43836aee2ce6d76ef588b9f0d365ab23db2a9c9c3afea0e5a47f-rootfs.mount: Deactivated successfully. Jan 20 03:16:47.211551 kubelet[2893]: E0120 03:16:47.210364 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:16:47.385542 containerd[1584]: time="2026-01-20T03:16:47.385070779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 03:16:47.409608 kubelet[2893]: I0120 03:16:47.407618 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64559bb65b-jrwd5" podStartSLOduration=3.413962831 podStartE2EDuration="6.407577591s" podCreationTimestamp="2026-01-20 03:16:41 +0000 UTC" firstStartedPulling="2026-01-20 03:16:42.02776019 +0000 UTC m=+27.087231195" lastFinishedPulling="2026-01-20 03:16:45.021374932 +0000 UTC m=+30.080845955" observedRunningTime="2026-01-20 03:16:45.428695095 +0000 UTC m=+30.488166136" watchObservedRunningTime="2026-01-20 03:16:47.407577591 +0000 UTC m=+32.467048611" Jan 20 03:16:49.211672 kubelet[2893]: E0120 03:16:49.211288 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:16:51.216476 kubelet[2893]: E0120 03:16:51.216405 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:16:53.085663 containerd[1584]: time="2026-01-20T03:16:53.085560269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:53.087630 containerd[1584]: time="2026-01-20T03:16:53.087565914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 03:16:53.090167 containerd[1584]: time="2026-01-20T03:16:53.089686708Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:53.091981 containerd[1584]: time="2026-01-20T03:16:53.091904748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:16:53.093915 containerd[1584]: time="2026-01-20T03:16:53.093504735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.708377981s" Jan 20 03:16:53.093915 containerd[1584]: time="2026-01-20T03:16:53.093552534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" 
returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 03:16:53.099237 containerd[1584]: time="2026-01-20T03:16:53.099202776Z" level=info msg="CreateContainer within sandbox \"86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 03:16:53.130163 containerd[1584]: time="2026-01-20T03:16:53.128803941Z" level=info msg="Container 5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:16:53.134715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434753139.mount: Deactivated successfully. Jan 20 03:16:53.147847 containerd[1584]: time="2026-01-20T03:16:53.147610087Z" level=info msg="CreateContainer within sandbox \"86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6\"" Jan 20 03:16:53.149128 containerd[1584]: time="2026-01-20T03:16:53.149081496Z" level=info msg="StartContainer for \"5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6\"" Jan 20 03:16:53.158814 containerd[1584]: time="2026-01-20T03:16:53.158744810Z" level=info msg="connecting to shim 5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6" address="unix:///run/containerd/s/291af159a5f3b70a3a6206bc424028379bdaf5aa35ae7a4cce85cf1411eedc35" protocol=ttrpc version=3 Jan 20 03:16:53.192791 systemd[1]: Started cri-containerd-5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6.scope - libcontainer container 5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6. 
Jan 20 03:16:53.210265 kubelet[2893]: E0120 03:16:53.210209 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:16:53.359701 containerd[1584]: time="2026-01-20T03:16:53.359318553Z" level=info msg="StartContainer for \"5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6\" returns successfully" Jan 20 03:16:54.431258 systemd[1]: cri-containerd-5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6.scope: Deactivated successfully. Jan 20 03:16:54.433160 systemd[1]: cri-containerd-5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6.scope: Consumed 828ms CPU time, 172.9M memory peak, 6.8M read from disk, 171.3M written to disk. Jan 20 03:16:54.494942 containerd[1584]: time="2026-01-20T03:16:54.494783681Z" level=info msg="received container exit event container_id:\"5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6\" id:\"5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6\" pid:3676 exited_at:{seconds:1768879014 nanos:493913071}" Jan 20 03:16:54.506724 kubelet[2893]: I0120 03:16:54.499098 2893 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 03:16:54.582340 systemd[1]: Created slice kubepods-burstable-podee08e93c_1ce2_4867_b712_3472379ca931.slice - libcontainer container kubepods-burstable-podee08e93c_1ce2_4867_b712_3472379ca931.slice. Jan 20 03:16:54.604237 systemd[1]: Created slice kubepods-besteffort-podf813f3ef_562d_4e92_bd19_fa37c63ad294.slice - libcontainer container kubepods-besteffort-podf813f3ef_562d_4e92_bd19_fa37c63ad294.slice. 
Jan 20 03:16:54.621632 systemd[1]: Created slice kubepods-burstable-pod5646518a_7477_4fd5_b634_ed0d62c37fd4.slice - libcontainer container kubepods-burstable-pod5646518a_7477_4fd5_b634_ed0d62c37fd4.slice. Jan 20 03:16:54.633196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ef83074456db2b9b7306c3cef2e29b9c4a40d20e2e9bc41c8ad639898957aa6-rootfs.mount: Deactivated successfully. Jan 20 03:16:54.648229 systemd[1]: Created slice kubepods-besteffort-podb6dc1880_5e6d_4d78_bdb4_990b30c248de.slice - libcontainer container kubepods-besteffort-podb6dc1880_5e6d_4d78_bdb4_990b30c248de.slice. Jan 20 03:16:54.669963 systemd[1]: Created slice kubepods-besteffort-pod2fb46006_e4ca_4a17_9db5_a5327a1b235a.slice - libcontainer container kubepods-besteffort-pod2fb46006_e4ca_4a17_9db5_a5327a1b235a.slice. Jan 20 03:16:54.679476 kubelet[2893]: I0120 03:16:54.679383 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-backend-key-pair\") pod \"whisker-5cd6d8bbd6-zzfm8\" (UID: \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\") " pod="calico-system/whisker-5cd6d8bbd6-zzfm8" Jan 20 03:16:54.681603 kubelet[2893]: I0120 03:16:54.679777 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgjvk\" (UniqueName: \"kubernetes.io/projected/c27917ed-5aa3-4301-90f7-0eaca88cf88c-kube-api-access-lgjvk\") pod \"whisker-5cd6d8bbd6-zzfm8\" (UID: \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\") " pod="calico-system/whisker-5cd6d8bbd6-zzfm8" Jan 20 03:16:54.681603 kubelet[2893]: I0120 03:16:54.679826 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ec36c53c-7c05-428f-8474-ef17694fd900-goldmane-key-pair\") pod \"goldmane-666569f655-nbjcp\" (UID: 
\"ec36c53c-7c05-428f-8474-ef17694fd900\") " pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:16:54.681603 kubelet[2893]: I0120 03:16:54.679866 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226x7\" (UniqueName: \"kubernetes.io/projected/ec36c53c-7c05-428f-8474-ef17694fd900-kube-api-access-226x7\") pod \"goldmane-666569f655-nbjcp\" (UID: \"ec36c53c-7c05-428f-8474-ef17694fd900\") " pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:16:54.681603 kubelet[2893]: I0120 03:16:54.679923 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee08e93c-1ce2-4867-b712-3472379ca931-config-volume\") pod \"coredns-674b8bbfcf-k4tgb\" (UID: \"ee08e93c-1ce2-4867-b712-3472379ca931\") " pod="kube-system/coredns-674b8bbfcf-k4tgb" Jan 20 03:16:54.681603 kubelet[2893]: I0120 03:16:54.679990 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8dc\" (UniqueName: \"kubernetes.io/projected/2fb46006-e4ca-4a17-9db5-a5327a1b235a-kube-api-access-tw8dc\") pod \"calico-apiserver-747897bbb-jcg9v\" (UID: \"2fb46006-e4ca-4a17-9db5-a5327a1b235a\") " pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" Jan 20 03:16:54.681849 kubelet[2893]: I0120 03:16:54.680054 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5646518a-7477-4fd5-b634-ed0d62c37fd4-config-volume\") pod \"coredns-674b8bbfcf-bpfg4\" (UID: \"5646518a-7477-4fd5-b634-ed0d62c37fd4\") " pod="kube-system/coredns-674b8bbfcf-bpfg4" Jan 20 03:16:54.681849 kubelet[2893]: I0120 03:16:54.680115 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxbh\" (UniqueName: 
\"kubernetes.io/projected/ee08e93c-1ce2-4867-b712-3472379ca931-kube-api-access-7gxbh\") pod \"coredns-674b8bbfcf-k4tgb\" (UID: \"ee08e93c-1ce2-4867-b712-3472379ca931\") " pod="kube-system/coredns-674b8bbfcf-k4tgb" Jan 20 03:16:54.681849 kubelet[2893]: I0120 03:16:54.680161 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2fb46006-e4ca-4a17-9db5-a5327a1b235a-calico-apiserver-certs\") pod \"calico-apiserver-747897bbb-jcg9v\" (UID: \"2fb46006-e4ca-4a17-9db5-a5327a1b235a\") " pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" Jan 20 03:16:54.681849 kubelet[2893]: I0120 03:16:54.680204 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f813f3ef-562d-4e92-bd19-fa37c63ad294-tigera-ca-bundle\") pod \"calico-kube-controllers-cb458d8fc-vxcdh\" (UID: \"f813f3ef-562d-4e92-bd19-fa37c63ad294\") " pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" Jan 20 03:16:54.681849 kubelet[2893]: I0120 03:16:54.680240 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr5mg\" (UniqueName: \"kubernetes.io/projected/f813f3ef-562d-4e92-bd19-fa37c63ad294-kube-api-access-xr5mg\") pod \"calico-kube-controllers-cb458d8fc-vxcdh\" (UID: \"f813f3ef-562d-4e92-bd19-fa37c63ad294\") " pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" Jan 20 03:16:54.682073 kubelet[2893]: I0120 03:16:54.680276 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6dc1880-5e6d-4d78-bdb4-990b30c248de-calico-apiserver-certs\") pod \"calico-apiserver-747897bbb-28rcr\" (UID: \"b6dc1880-5e6d-4d78-bdb4-990b30c248de\") " pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" Jan 20 03:16:54.682073 kubelet[2893]: 
I0120 03:16:54.680311 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68wqv\" (UniqueName: \"kubernetes.io/projected/b6dc1880-5e6d-4d78-bdb4-990b30c248de-kube-api-access-68wqv\") pod \"calico-apiserver-747897bbb-28rcr\" (UID: \"b6dc1880-5e6d-4d78-bdb4-990b30c248de\") " pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" Jan 20 03:16:54.682073 kubelet[2893]: I0120 03:16:54.680344 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-ca-bundle\") pod \"whisker-5cd6d8bbd6-zzfm8\" (UID: \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\") " pod="calico-system/whisker-5cd6d8bbd6-zzfm8" Jan 20 03:16:54.682073 kubelet[2893]: I0120 03:16:54.680427 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec36c53c-7c05-428f-8474-ef17694fd900-config\") pod \"goldmane-666569f655-nbjcp\" (UID: \"ec36c53c-7c05-428f-8474-ef17694fd900\") " pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:16:54.682073 kubelet[2893]: I0120 03:16:54.680466 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec36c53c-7c05-428f-8474-ef17694fd900-goldmane-ca-bundle\") pod \"goldmane-666569f655-nbjcp\" (UID: \"ec36c53c-7c05-428f-8474-ef17694fd900\") " pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:16:54.682302 kubelet[2893]: I0120 03:16:54.680507 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8q2s\" (UniqueName: \"kubernetes.io/projected/5646518a-7477-4fd5-b634-ed0d62c37fd4-kube-api-access-f8q2s\") pod \"coredns-674b8bbfcf-bpfg4\" (UID: \"5646518a-7477-4fd5-b634-ed0d62c37fd4\") " 
pod="kube-system/coredns-674b8bbfcf-bpfg4" Jan 20 03:16:54.690890 systemd[1]: Created slice kubepods-besteffort-podec36c53c_7c05_428f_8474_ef17694fd900.slice - libcontainer container kubepods-besteffort-podec36c53c_7c05_428f_8474_ef17694fd900.slice. Jan 20 03:16:54.706587 systemd[1]: Created slice kubepods-besteffort-podc27917ed_5aa3_4301_90f7_0eaca88cf88c.slice - libcontainer container kubepods-besteffort-podc27917ed_5aa3_4301_90f7_0eaca88cf88c.slice. Jan 20 03:16:54.898164 containerd[1584]: time="2026-01-20T03:16:54.897391663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k4tgb,Uid:ee08e93c-1ce2-4867-b712-3472379ca931,Namespace:kube-system,Attempt:0,}" Jan 20 03:16:54.914117 containerd[1584]: time="2026-01-20T03:16:54.912851664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb458d8fc-vxcdh,Uid:f813f3ef-562d-4e92-bd19-fa37c63ad294,Namespace:calico-system,Attempt:0,}" Jan 20 03:16:54.940916 containerd[1584]: time="2026-01-20T03:16:54.940876991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfg4,Uid:5646518a-7477-4fd5-b634-ed0d62c37fd4,Namespace:kube-system,Attempt:0,}" Jan 20 03:16:54.974768 containerd[1584]: time="2026-01-20T03:16:54.974730835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-28rcr,Uid:b6dc1880-5e6d-4d78-bdb4-990b30c248de,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:16:54.984117 containerd[1584]: time="2026-01-20T03:16:54.984076767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-jcg9v,Uid:2fb46006-e4ca-4a17-9db5-a5327a1b235a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:16:55.001696 containerd[1584]: time="2026-01-20T03:16:55.001656714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nbjcp,Uid:ec36c53c-7c05-428f-8474-ef17694fd900,Namespace:calico-system,Attempt:0,}" Jan 20 03:16:55.017158 containerd[1584]: 
time="2026-01-20T03:16:55.017126404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd6d8bbd6-zzfm8,Uid:c27917ed-5aa3-4301-90f7-0eaca88cf88c,Namespace:calico-system,Attempt:0,}" Jan 20 03:16:55.232488 systemd[1]: Created slice kubepods-besteffort-pod4233551d_98b7_48f5_b9e1_45373c718e78.slice - libcontainer container kubepods-besteffort-pod4233551d_98b7_48f5_b9e1_45373c718e78.slice. Jan 20 03:16:55.243428 containerd[1584]: time="2026-01-20T03:16:55.243359725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vfgpq,Uid:4233551d-98b7-48f5-b9e1-45373c718e78,Namespace:calico-system,Attempt:0,}" Jan 20 03:16:55.329956 containerd[1584]: time="2026-01-20T03:16:55.329878061Z" level=error msg="Failed to destroy network for sandbox \"548b6e6fa7a0cabc4275882286af10732825d4bf2d61f49e382497e97832bc66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.331291 containerd[1584]: time="2026-01-20T03:16:55.331221708Z" level=error msg="Failed to destroy network for sandbox \"1c0202788354bad651f2164c46d81eb6b6acc5f626fbcef4432f85d2133d19dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.338900 containerd[1584]: time="2026-01-20T03:16:55.338536968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-28rcr,Uid:b6dc1880-5e6d-4d78-bdb4-990b30c248de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"548b6e6fa7a0cabc4275882286af10732825d4bf2d61f49e382497e97832bc66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 20 03:16:55.350535 containerd[1584]: time="2026-01-20T03:16:55.350475369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k4tgb,Uid:ee08e93c-1ce2-4867-b712-3472379ca931,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0202788354bad651f2164c46d81eb6b6acc5f626fbcef4432f85d2133d19dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.373813 kubelet[2893]: E0120 03:16:55.373077 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"548b6e6fa7a0cabc4275882286af10732825d4bf2d61f49e382497e97832bc66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.373813 kubelet[2893]: E0120 03:16:55.373203 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"548b6e6fa7a0cabc4275882286af10732825d4bf2d61f49e382497e97832bc66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" Jan 20 03:16:55.373813 kubelet[2893]: E0120 03:16:55.373245 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"548b6e6fa7a0cabc4275882286af10732825d4bf2d61f49e382497e97832bc66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" Jan 20 03:16:55.373813 kubelet[2893]: E0120 03:16:55.373560 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0202788354bad651f2164c46d81eb6b6acc5f626fbcef4432f85d2133d19dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.374215 kubelet[2893]: E0120 03:16:55.373674 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c0202788354bad651f2164c46d81eb6b6acc5f626fbcef4432f85d2133d19dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k4tgb" Jan 20 03:16:55.374215 kubelet[2893]: E0120 03:16:55.373731 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747897bbb-28rcr_calico-apiserver(b6dc1880-5e6d-4d78-bdb4-990b30c248de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747897bbb-28rcr_calico-apiserver(b6dc1880-5e6d-4d78-bdb4-990b30c248de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"548b6e6fa7a0cabc4275882286af10732825d4bf2d61f49e382497e97832bc66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:16:55.374855 kubelet[2893]: E0120 03:16:55.373710 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"1c0202788354bad651f2164c46d81eb6b6acc5f626fbcef4432f85d2133d19dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-k4tgb" Jan 20 03:16:55.374855 kubelet[2893]: E0120 03:16:55.374699 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-k4tgb_kube-system(ee08e93c-1ce2-4867-b712-3472379ca931)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-k4tgb_kube-system(ee08e93c-1ce2-4867-b712-3472379ca931)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c0202788354bad651f2164c46d81eb6b6acc5f626fbcef4432f85d2133d19dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-k4tgb" podUID="ee08e93c-1ce2-4867-b712-3472379ca931" Jan 20 03:16:55.378623 containerd[1584]: time="2026-01-20T03:16:55.378542962Z" level=error msg="Failed to destroy network for sandbox \"92da0e306da05214e4ae66741828776be8603ac6d910fe9264a4f8c9a41ee31a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.382538 containerd[1584]: time="2026-01-20T03:16:55.382471259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfg4,Uid:5646518a-7477-4fd5-b634-ed0d62c37fd4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da0e306da05214e4ae66741828776be8603ac6d910fe9264a4f8c9a41ee31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.383232 kubelet[2893]: E0120 03:16:55.383103 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da0e306da05214e4ae66741828776be8603ac6d910fe9264a4f8c9a41ee31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.383232 kubelet[2893]: E0120 03:16:55.383141 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da0e306da05214e4ae66741828776be8603ac6d910fe9264a4f8c9a41ee31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpfg4" Jan 20 03:16:55.384504 kubelet[2893]: E0120 03:16:55.383380 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92da0e306da05214e4ae66741828776be8603ac6d910fe9264a4f8c9a41ee31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpfg4" Jan 20 03:16:55.384504 kubelet[2893]: E0120 03:16:55.383461 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bpfg4_kube-system(5646518a-7477-4fd5-b634-ed0d62c37fd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bpfg4_kube-system(5646518a-7477-4fd5-b634-ed0d62c37fd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"92da0e306da05214e4ae66741828776be8603ac6d910fe9264a4f8c9a41ee31a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bpfg4" podUID="5646518a-7477-4fd5-b634-ed0d62c37fd4" Jan 20 03:16:55.396319 containerd[1584]: time="2026-01-20T03:16:55.396275761Z" level=error msg="Failed to destroy network for sandbox \"b7969e6d79f093a54e66f92f84055f25bc409a7bad5ca2e3a1173cb777a84e05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.398257 containerd[1584]: time="2026-01-20T03:16:55.396385755Z" level=error msg="Failed to destroy network for sandbox \"3e0cf9cb5270a5e761b8a929fa318f5b550ae423ff86018494bba0fb2202ad53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.398951 containerd[1584]: time="2026-01-20T03:16:55.398910436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5cd6d8bbd6-zzfm8,Uid:c27917ed-5aa3-4301-90f7-0eaca88cf88c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7969e6d79f093a54e66f92f84055f25bc409a7bad5ca2e3a1173cb777a84e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.400497 containerd[1584]: time="2026-01-20T03:16:55.400451441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-jcg9v,Uid:2fb46006-e4ca-4a17-9db5-a5327a1b235a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"3e0cf9cb5270a5e761b8a929fa318f5b550ae423ff86018494bba0fb2202ad53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.401282 kubelet[2893]: E0120 03:16:55.401194 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0cf9cb5270a5e761b8a929fa318f5b550ae423ff86018494bba0fb2202ad53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.401282 kubelet[2893]: E0120 03:16:55.401225 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7969e6d79f093a54e66f92f84055f25bc409a7bad5ca2e3a1173cb777a84e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.401402 kubelet[2893]: E0120 03:16:55.401311 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7969e6d79f093a54e66f92f84055f25bc409a7bad5ca2e3a1173cb777a84e05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd6d8bbd6-zzfm8" Jan 20 03:16:55.401402 kubelet[2893]: E0120 03:16:55.401347 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7969e6d79f093a54e66f92f84055f25bc409a7bad5ca2e3a1173cb777a84e05\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5cd6d8bbd6-zzfm8" Jan 20 03:16:55.401491 kubelet[2893]: E0120 03:16:55.401415 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5cd6d8bbd6-zzfm8_calico-system(c27917ed-5aa3-4301-90f7-0eaca88cf88c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5cd6d8bbd6-zzfm8_calico-system(c27917ed-5aa3-4301-90f7-0eaca88cf88c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7969e6d79f093a54e66f92f84055f25bc409a7bad5ca2e3a1173cb777a84e05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5cd6d8bbd6-zzfm8" podUID="c27917ed-5aa3-4301-90f7-0eaca88cf88c" Jan 20 03:16:55.401759 kubelet[2893]: E0120 03:16:55.401244 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0cf9cb5270a5e761b8a929fa318f5b550ae423ff86018494bba0fb2202ad53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" Jan 20 03:16:55.401759 kubelet[2893]: E0120 03:16:55.401622 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e0cf9cb5270a5e761b8a929fa318f5b550ae423ff86018494bba0fb2202ad53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" Jan 20 
03:16:55.401759 kubelet[2893]: E0120 03:16:55.401706 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-747897bbb-jcg9v_calico-apiserver(2fb46006-e4ca-4a17-9db5-a5327a1b235a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-747897bbb-jcg9v_calico-apiserver(2fb46006-e4ca-4a17-9db5-a5327a1b235a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e0cf9cb5270a5e761b8a929fa318f5b550ae423ff86018494bba0fb2202ad53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:16:55.402289 containerd[1584]: time="2026-01-20T03:16:55.401955653Z" level=error msg="Failed to destroy network for sandbox \"bf5cee397a4201dd16395b313390f0266fd37ec6d552596f0fc5a200fde729ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.403521 containerd[1584]: time="2026-01-20T03:16:55.403483482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nbjcp,Uid:ec36c53c-7c05-428f-8474-ef17694fd900,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5cee397a4201dd16395b313390f0266fd37ec6d552596f0fc5a200fde729ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.404118 kubelet[2893]: E0120 03:16:55.403877 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bf5cee397a4201dd16395b313390f0266fd37ec6d552596f0fc5a200fde729ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.404118 kubelet[2893]: E0120 03:16:55.403919 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5cee397a4201dd16395b313390f0266fd37ec6d552596f0fc5a200fde729ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:16:55.404118 kubelet[2893]: E0120 03:16:55.403942 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf5cee397a4201dd16395b313390f0266fd37ec6d552596f0fc5a200fde729ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:16:55.404269 kubelet[2893]: E0120 03:16:55.403982 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf5cee397a4201dd16395b313390f0266fd37ec6d552596f0fc5a200fde729ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nbjcp" 
podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:16:55.426426 containerd[1584]: time="2026-01-20T03:16:55.425382755Z" level=error msg="Failed to destroy network for sandbox \"720465959b4c7f0b5ede196233008d497dafe339c84bfa75b6f875b2c9de0443\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.428084 containerd[1584]: time="2026-01-20T03:16:55.428041234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 03:16:55.430193 containerd[1584]: time="2026-01-20T03:16:55.430057372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb458d8fc-vxcdh,Uid:f813f3ef-562d-4e92-bd19-fa37c63ad294,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"720465959b4c7f0b5ede196233008d497dafe339c84bfa75b6f875b2c9de0443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.430733 kubelet[2893]: E0120 03:16:55.430557 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720465959b4c7f0b5ede196233008d497dafe339c84bfa75b6f875b2c9de0443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.430952 kubelet[2893]: E0120 03:16:55.430674 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720465959b4c7f0b5ede196233008d497dafe339c84bfa75b6f875b2c9de0443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" Jan 20 03:16:55.431126 kubelet[2893]: E0120 03:16:55.430849 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720465959b4c7f0b5ede196233008d497dafe339c84bfa75b6f875b2c9de0443\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" Jan 20 03:16:55.431476 kubelet[2893]: E0120 03:16:55.431210 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cb458d8fc-vxcdh_calico-system(f813f3ef-562d-4e92-bd19-fa37c63ad294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cb458d8fc-vxcdh_calico-system(f813f3ef-562d-4e92-bd19-fa37c63ad294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"720465959b4c7f0b5ede196233008d497dafe339c84bfa75b6f875b2c9de0443\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:16:55.472436 containerd[1584]: time="2026-01-20T03:16:55.472367759Z" level=error msg="Failed to destroy network for sandbox \"ae3c9028aff23352fbc5011ea5a1ff27204025bd6a7e39bb631a1f97390a0f1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.474141 containerd[1584]: time="2026-01-20T03:16:55.474080280Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-vfgpq,Uid:4233551d-98b7-48f5-b9e1-45373c718e78,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c9028aff23352fbc5011ea5a1ff27204025bd6a7e39bb631a1f97390a0f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.475088 kubelet[2893]: E0120 03:16:55.474510 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c9028aff23352fbc5011ea5a1ff27204025bd6a7e39bb631a1f97390a0f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:16:55.475088 kubelet[2893]: E0120 03:16:55.474579 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c9028aff23352fbc5011ea5a1ff27204025bd6a7e39bb631a1f97390a0f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vfgpq" Jan 20 03:16:55.475088 kubelet[2893]: E0120 03:16:55.474643 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae3c9028aff23352fbc5011ea5a1ff27204025bd6a7e39bb631a1f97390a0f1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vfgpq" Jan 20 03:16:55.475389 kubelet[2893]: E0120 03:16:55.474722 2893 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae3c9028aff23352fbc5011ea5a1ff27204025bd6a7e39bb631a1f97390a0f1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:17:01.380529 kubelet[2893]: I0120 03:17:01.380460 2893 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 03:17:05.270908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583434747.mount: Deactivated successfully. Jan 20 03:17:05.369632 containerd[1584]: time="2026-01-20T03:17:05.369372599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:17:05.371997 containerd[1584]: time="2026-01-20T03:17:05.357388012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 03:17:05.381127 containerd[1584]: time="2026-01-20T03:17:05.380741283Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:17:05.381656 containerd[1584]: time="2026-01-20T03:17:05.381542233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 03:17:05.387380 containerd[1584]: time="2026-01-20T03:17:05.387330312Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.955213015s" Jan 20 03:17:05.387576 containerd[1584]: time="2026-01-20T03:17:05.387449680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 03:17:05.495469 containerd[1584]: time="2026-01-20T03:17:05.495409291Z" level=info msg="CreateContainer within sandbox \"86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 03:17:05.566910 containerd[1584]: time="2026-01-20T03:17:05.566802039Z" level=info msg="Container 7ba71f846fa3c26193c9595f76b490d8b55a611a34ab2bc6b06c9625f7e51cac: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:17:05.569672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911583954.mount: Deactivated successfully. 
Jan 20 03:17:05.659126 containerd[1584]: time="2026-01-20T03:17:05.659019913Z" level=info msg="CreateContainer within sandbox \"86abfb523abf28cf6afe60b7ba8a158206e444e3f20d65b74b25bcb5ee477205\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7ba71f846fa3c26193c9595f76b490d8b55a611a34ab2bc6b06c9625f7e51cac\"" Jan 20 03:17:05.661404 containerd[1584]: time="2026-01-20T03:17:05.661270558Z" level=info msg="StartContainer for \"7ba71f846fa3c26193c9595f76b490d8b55a611a34ab2bc6b06c9625f7e51cac\"" Jan 20 03:17:05.709088 containerd[1584]: time="2026-01-20T03:17:05.708996008Z" level=info msg="connecting to shim 7ba71f846fa3c26193c9595f76b490d8b55a611a34ab2bc6b06c9625f7e51cac" address="unix:///run/containerd/s/291af159a5f3b70a3a6206bc424028379bdaf5aa35ae7a4cce85cf1411eedc35" protocol=ttrpc version=3 Jan 20 03:17:05.769864 systemd[1]: Started cri-containerd-7ba71f846fa3c26193c9595f76b490d8b55a611a34ab2bc6b06c9625f7e51cac.scope - libcontainer container 7ba71f846fa3c26193c9595f76b490d8b55a611a34ab2bc6b06c9625f7e51cac. 
Jan 20 03:17:05.944138 containerd[1584]: time="2026-01-20T03:17:05.943808066Z" level=info msg="StartContainer for \"7ba71f846fa3c26193c9595f76b490d8b55a611a34ab2bc6b06c9625f7e51cac\" returns successfully" Jan 20 03:17:06.222197 containerd[1584]: time="2026-01-20T03:17:06.222135319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfg4,Uid:5646518a-7477-4fd5-b634-ed0d62c37fd4,Namespace:kube-system,Attempt:0,}" Jan 20 03:17:06.224411 containerd[1584]: time="2026-01-20T03:17:06.222683677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nbjcp,Uid:ec36c53c-7c05-428f-8474-ef17694fd900,Namespace:calico-system,Attempt:0,}" Jan 20 03:17:06.536701 containerd[1584]: time="2026-01-20T03:17:06.534107150Z" level=error msg="Failed to destroy network for sandbox \"57bcaced4e324ed10a546f843af966691b27407098b8aeae30db3a1f2cb6f790\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:17:06.538944 systemd[1]: run-netns-cni\x2dd50356ae\x2dd9b0\x2da617\x2dd5da\x2dd1aef243deb3.mount: Deactivated successfully. 
Jan 20 03:17:06.544190 containerd[1584]: time="2026-01-20T03:17:06.544031487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfg4,Uid:5646518a-7477-4fd5-b634-ed0d62c37fd4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bcaced4e324ed10a546f843af966691b27407098b8aeae30db3a1f2cb6f790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:17:06.556141 containerd[1584]: time="2026-01-20T03:17:06.553155652Z" level=error msg="Failed to destroy network for sandbox \"a6b589c021a2f7ea55b69df5892915004f755e64ec7b7c9f90e8d646365d5244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:17:06.558011 systemd[1]: run-netns-cni\x2de27dfc5a\x2da8a9\x2d0af9\x2ded21\x2da4ac6f81a918.mount: Deactivated successfully. 
Jan 20 03:17:06.560981 containerd[1584]: time="2026-01-20T03:17:06.559180068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nbjcp,Uid:ec36c53c-7c05-428f-8474-ef17694fd900,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b589c021a2f7ea55b69df5892915004f755e64ec7b7c9f90e8d646365d5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:17:06.561630 kubelet[2893]: E0120 03:17:06.561413 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b589c021a2f7ea55b69df5892915004f755e64ec7b7c9f90e8d646365d5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:17:06.561630 kubelet[2893]: E0120 03:17:06.561493 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b589c021a2f7ea55b69df5892915004f755e64ec7b7c9f90e8d646365d5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:17:06.561630 kubelet[2893]: E0120 03:17:06.561532 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6b589c021a2f7ea55b69df5892915004f755e64ec7b7c9f90e8d646365d5244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-nbjcp" Jan 20 03:17:06.564005 kubelet[2893]: E0120 03:17:06.563608 2893 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bcaced4e324ed10a546f843af966691b27407098b8aeae30db3a1f2cb6f790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 03:17:06.564005 kubelet[2893]: E0120 03:17:06.563653 2893 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bcaced4e324ed10a546f843af966691b27407098b8aeae30db3a1f2cb6f790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpfg4" Jan 20 03:17:06.564005 kubelet[2893]: E0120 03:17:06.563678 2893 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57bcaced4e324ed10a546f843af966691b27407098b8aeae30db3a1f2cb6f790\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bpfg4" Jan 20 03:17:06.564162 kubelet[2893]: E0120 03:17:06.563737 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bpfg4_kube-system(5646518a-7477-4fd5-b634-ed0d62c37fd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bpfg4_kube-system(5646518a-7477-4fd5-b634-ed0d62c37fd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57bcaced4e324ed10a546f843af966691b27407098b8aeae30db3a1f2cb6f790\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bpfg4" podUID="5646518a-7477-4fd5-b634-ed0d62c37fd4" Jan 20 03:17:06.564162 kubelet[2893]: E0120 03:17:06.563964 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6b589c021a2f7ea55b69df5892915004f755e64ec7b7c9f90e8d646365d5244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:17:06.625274 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 03:17:06.633809 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 03:17:06.660962 kubelet[2893]: I0120 03:17:06.656560 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k9wfd" podStartSLOduration=2.355582968 podStartE2EDuration="25.656528177s" podCreationTimestamp="2026-01-20 03:16:41 +0000 UTC" firstStartedPulling="2026-01-20 03:16:42.09284097 +0000 UTC m=+27.152311987" lastFinishedPulling="2026-01-20 03:17:05.393786183 +0000 UTC m=+50.453257196" observedRunningTime="2026-01-20 03:17:06.653133804 +0000 UTC m=+51.712604841" watchObservedRunningTime="2026-01-20 03:17:06.656528177 +0000 UTC m=+51.715999200" Jan 20 03:17:07.002082 kubelet[2893]: I0120 03:17:07.000340 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-ca-bundle\") pod \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\" (UID: \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\") " Jan 20 03:17:07.002082 kubelet[2893]: I0120 03:17:07.000429 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-backend-key-pair\") pod \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\" (UID: \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\") " Jan 20 03:17:07.002082 kubelet[2893]: I0120 03:17:07.000473 2893 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgjvk\" (UniqueName: \"kubernetes.io/projected/c27917ed-5aa3-4301-90f7-0eaca88cf88c-kube-api-access-lgjvk\") pod \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\" (UID: \"c27917ed-5aa3-4301-90f7-0eaca88cf88c\") " Jan 20 03:17:07.016616 kubelet[2893]: I0120 03:17:07.011979 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod 
"c27917ed-5aa3-4301-90f7-0eaca88cf88c" (UID: "c27917ed-5aa3-4301-90f7-0eaca88cf88c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 03:17:07.019851 systemd[1]: var-lib-kubelet-pods-c27917ed\x2d5aa3\x2d4301\x2d90f7\x2d0eaca88cf88c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlgjvk.mount: Deactivated successfully. Jan 20 03:17:07.025531 kubelet[2893]: I0120 03:17:07.022719 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c27917ed-5aa3-4301-90f7-0eaca88cf88c" (UID: "c27917ed-5aa3-4301-90f7-0eaca88cf88c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 03:17:07.031336 systemd[1]: var-lib-kubelet-pods-c27917ed\x2d5aa3\x2d4301\x2d90f7\x2d0eaca88cf88c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 03:17:07.035794 kubelet[2893]: I0120 03:17:07.035748 2893 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27917ed-5aa3-4301-90f7-0eaca88cf88c-kube-api-access-lgjvk" (OuterVolumeSpecName: "kube-api-access-lgjvk") pod "c27917ed-5aa3-4301-90f7-0eaca88cf88c" (UID: "c27917ed-5aa3-4301-90f7-0eaca88cf88c"). InnerVolumeSpecName "kube-api-access-lgjvk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 03:17:07.117444 kubelet[2893]: I0120 03:17:07.117131 2893 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-ca-bundle\") on node \"srv-jqch3.gb1.brightbox.com\" DevicePath \"\"" Jan 20 03:17:07.117444 kubelet[2893]: I0120 03:17:07.117211 2893 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c27917ed-5aa3-4301-90f7-0eaca88cf88c-whisker-backend-key-pair\") on node \"srv-jqch3.gb1.brightbox.com\" DevicePath \"\"" Jan 20 03:17:07.117444 kubelet[2893]: I0120 03:17:07.117263 2893 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgjvk\" (UniqueName: \"kubernetes.io/projected/c27917ed-5aa3-4301-90f7-0eaca88cf88c-kube-api-access-lgjvk\") on node \"srv-jqch3.gb1.brightbox.com\" DevicePath \"\"" Jan 20 03:17:07.212471 containerd[1584]: time="2026-01-20T03:17:07.212026474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb458d8fc-vxcdh,Uid:f813f3ef-562d-4e92-bd19-fa37c63ad294,Namespace:calico-system,Attempt:0,}" Jan 20 03:17:07.238180 systemd[1]: Removed slice kubepods-besteffort-podc27917ed_5aa3_4301_90f7_0eaca88cf88c.slice - libcontainer container kubepods-besteffort-podc27917ed_5aa3_4301_90f7_0eaca88cf88c.slice. Jan 20 03:17:07.683283 systemd[1]: Started sshd@12-10.230.49.118:22-164.92.217.44:33332.service - OpenSSH per-connection server daemon (164.92.217.44:33332). Jan 20 03:17:07.743965 systemd[1]: Created slice kubepods-besteffort-pod372633f3_4d42_411f_aa34_da8a913ea6df.slice - libcontainer container kubepods-besteffort-pod372633f3_4d42_411f_aa34_da8a913ea6df.slice. 
Jan 20 03:17:07.837669 kubelet[2893]: I0120 03:17:07.837621 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/372633f3-4d42-411f-aa34-da8a913ea6df-whisker-ca-bundle\") pod \"whisker-675856cf68-jvm86\" (UID: \"372633f3-4d42-411f-aa34-da8a913ea6df\") " pod="calico-system/whisker-675856cf68-jvm86" Jan 20 03:17:07.839425 kubelet[2893]: I0120 03:17:07.839141 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24r9f\" (UniqueName: \"kubernetes.io/projected/372633f3-4d42-411f-aa34-da8a913ea6df-kube-api-access-24r9f\") pod \"whisker-675856cf68-jvm86\" (UID: \"372633f3-4d42-411f-aa34-da8a913ea6df\") " pod="calico-system/whisker-675856cf68-jvm86" Jan 20 03:17:07.839425 kubelet[2893]: I0120 03:17:07.839368 2893 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/372633f3-4d42-411f-aa34-da8a913ea6df-whisker-backend-key-pair\") pod \"whisker-675856cf68-jvm86\" (UID: \"372633f3-4d42-411f-aa34-da8a913ea6df\") " pod="calico-system/whisker-675856cf68-jvm86" Jan 20 03:17:07.885888 sshd[4119]: Invalid user search from 164.92.217.44 port 33332 Jan 20 03:17:07.892847 systemd-networkd[1484]: cali7f6eb2fefc0: Link UP Jan 20 03:17:07.896875 systemd-networkd[1484]: cali7f6eb2fefc0: Gained carrier Jan 20 03:17:07.928858 sshd[4119]: Connection closed by invalid user search 164.92.217.44 port 33332 [preauth] Jan 20 03:17:07.930662 systemd[1]: sshd@12-10.230.49.118:22-164.92.217.44:33332.service: Deactivated successfully. 
Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.285 [INFO][4065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.334 [INFO][4065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0 calico-kube-controllers-cb458d8fc- calico-system f813f3ef-562d-4e92-bd19-fa37c63ad294 840 0 2026-01-20 03:16:41 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cb458d8fc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com calico-kube-controllers-cb458d8fc-vxcdh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7f6eb2fefc0 [] [] }} ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.334 [INFO][4065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.550 [INFO][4080] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" HandleID="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" 
Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.562 [INFO][4080] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" HandleID="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001021e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jqch3.gb1.brightbox.com", "pod":"calico-kube-controllers-cb458d8fc-vxcdh", "timestamp":"2026-01-20 03:17:07.550833801 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.562 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.563 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.563 [INFO][4080] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.599 [INFO][4080] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.614 [INFO][4080] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.662 [INFO][4080] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.725 [INFO][4080] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.778 [INFO][4080] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.778 [INFO][4080] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.786 [INFO][4080] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.820 [INFO][4080] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.860 [INFO][4080] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.193/26] block=192.168.12.192/26 handle="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.861 [INFO][4080] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.193/26] handle="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:07.980307 containerd[1584]: 2026-01-20 03:17:07.861 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:17:07.986863 containerd[1584]: 2026-01-20 03:17:07.861 [INFO][4080] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.193/26] IPv6=[] ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" HandleID="k8s-pod-network.461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" Jan 20 03:17:07.986863 containerd[1584]: 2026-01-20 03:17:07.867 [INFO][4065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0", GenerateName:"calico-kube-controllers-cb458d8fc-", Namespace:"calico-system", SelfLink:"", UID:"f813f3ef-562d-4e92-bd19-fa37c63ad294", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cb458d8fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-cb458d8fc-vxcdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f6eb2fefc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:07.986863 containerd[1584]: 2026-01-20 03:17:07.867 [INFO][4065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.193/32] ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" Jan 20 03:17:07.986863 containerd[1584]: 2026-01-20 03:17:07.867 [INFO][4065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f6eb2fefc0 ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" Jan 20 03:17:07.986863 containerd[1584]: 2026-01-20 03:17:07.899 [INFO][4065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" Jan 20 03:17:07.986863 containerd[1584]: 2026-01-20 03:17:07.899 [INFO][4065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0", GenerateName:"calico-kube-controllers-cb458d8fc-", Namespace:"calico-system", SelfLink:"", UID:"f813f3ef-562d-4e92-bd19-fa37c63ad294", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cb458d8fc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd", Pod:"calico-kube-controllers-cb458d8fc-vxcdh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.193/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7f6eb2fefc0", MAC:"9a:00:63:91:23:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:07.987293 containerd[1584]: 2026-01-20 03:17:07.971 [INFO][4065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" Namespace="calico-system" Pod="calico-kube-controllers-cb458d8fc-vxcdh" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--kube--controllers--cb458d8fc--vxcdh-eth0" Jan 20 03:17:08.056848 containerd[1584]: time="2026-01-20T03:17:08.052402651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675856cf68-jvm86,Uid:372633f3-4d42-411f-aa34-da8a913ea6df,Namespace:calico-system,Attempt:0,}" Jan 20 03:17:08.225278 containerd[1584]: time="2026-01-20T03:17:08.224805575Z" level=info msg="connecting to shim 461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd" address="unix:///run/containerd/s/73af4de0f67989f589476bb2fdfa6a93822cddb4a76877736179230207575f6d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:08.255917 systemd-networkd[1484]: cali502b858f749: Link UP Jan 20 03:17:08.259008 systemd-networkd[1484]: cali502b858f749: Gained carrier Jan 20 03:17:08.299984 systemd[1]: Started cri-containerd-461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd.scope - libcontainer container 461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd. 
Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.106 [INFO][4134] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.127 [INFO][4134] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0 whisker-675856cf68- calico-system 372633f3-4d42-411f-aa34-da8a913ea6df 928 0 2026-01-20 03:17:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:675856cf68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com whisker-675856cf68-jvm86 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali502b858f749 [] [] }} ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.129 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.179 [INFO][4149] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" HandleID="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Workload="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.180 [INFO][4149] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" HandleID="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Workload="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfe40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jqch3.gb1.brightbox.com", "pod":"whisker-675856cf68-jvm86", "timestamp":"2026-01-20 03:17:08.179593046 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.180 [INFO][4149] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.180 [INFO][4149] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.180 [INFO][4149] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.194 [INFO][4149] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.201 [INFO][4149] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.210 [INFO][4149] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.214 [INFO][4149] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.219 [INFO][4149] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.220 [INFO][4149] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.222 [INFO][4149] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52 Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.231 [INFO][4149] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.244 [INFO][4149] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.194/26] block=192.168.12.192/26 handle="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.244 [INFO][4149] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.194/26] handle="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.244 [INFO][4149] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:17:08.302486 containerd[1584]: 2026-01-20 03:17:08.245 [INFO][4149] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.194/26] IPv6=[] ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" HandleID="k8s-pod-network.ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Workload="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" Jan 20 03:17:08.304563 containerd[1584]: 2026-01-20 03:17:08.251 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0", GenerateName:"whisker-675856cf68-", Namespace:"calico-system", SelfLink:"", UID:"372633f3-4d42-411f-aa34-da8a913ea6df", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"675856cf68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"whisker-675856cf68-jvm86", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali502b858f749", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:08.304563 containerd[1584]: 2026-01-20 03:17:08.251 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.194/32] ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" Jan 20 03:17:08.304563 containerd[1584]: 2026-01-20 03:17:08.251 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali502b858f749 ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" Jan 20 03:17:08.304563 containerd[1584]: 2026-01-20 03:17:08.261 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" Jan 20 03:17:08.304563 containerd[1584]: 2026-01-20 
03:17:08.265 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0", GenerateName:"whisker-675856cf68-", Namespace:"calico-system", SelfLink:"", UID:"372633f3-4d42-411f-aa34-da8a913ea6df", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 17, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675856cf68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52", Pod:"whisker-675856cf68-jvm86", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali502b858f749", MAC:"4a:1e:70:28:e4:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:08.304563 containerd[1584]: 2026-01-20 03:17:08.295 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" Namespace="calico-system" Pod="whisker-675856cf68-jvm86" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-whisker--675856cf68--jvm86-eth0" Jan 20 03:17:08.347254 containerd[1584]: time="2026-01-20T03:17:08.346827499Z" level=info msg="connecting to shim ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52" address="unix:///run/containerd/s/c2faab998ece1075e4a6384f5111169dd30d4864d75a0370b78315c06f7e330f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:08.391023 systemd[1]: Started cri-containerd-ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52.scope - libcontainer container ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52. Jan 20 03:17:08.437957 containerd[1584]: time="2026-01-20T03:17:08.437860277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cb458d8fc-vxcdh,Uid:f813f3ef-562d-4e92-bd19-fa37c63ad294,Namespace:calico-system,Attempt:0,} returns sandbox id \"461fc33f328164085118ca1039e03a0d0e1819091638bdda4583449657b2adbd\"" Jan 20 03:17:08.440439 containerd[1584]: time="2026-01-20T03:17:08.440410380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 03:17:08.749249 containerd[1584]: time="2026-01-20T03:17:08.749143416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675856cf68-jvm86,Uid:372633f3-4d42-411f-aa34-da8a913ea6df,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba4eb66b434203292afefed146c5a5635e75bcd33453174792c5cd578cfcab52\"" Jan 20 03:17:08.797389 containerd[1584]: time="2026-01-20T03:17:08.797334417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:08.799013 containerd[1584]: time="2026-01-20T03:17:08.798982991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 03:17:08.799088 containerd[1584]: 
time="2026-01-20T03:17:08.799032839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 03:17:08.799532 kubelet[2893]: E0120 03:17:08.799483 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:17:08.799677 kubelet[2893]: E0120 03:17:08.799562 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:17:08.800950 containerd[1584]: time="2026-01-20T03:17:08.800919935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 03:17:08.823899 kubelet[2893]: E0120 03:17:08.823712 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr5mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cb458d8fc-vxcdh_calico-system(f813f3ef-562d-4e92-bd19-fa37c63ad294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:08.825091 kubelet[2893]: E0120 03:17:08.825003 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:17:09.137208 containerd[1584]: time="2026-01-20T03:17:09.136402121Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:09.138073 containerd[1584]: 
time="2026-01-20T03:17:09.137569084Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 03:17:09.138073 containerd[1584]: time="2026-01-20T03:17:09.137677644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 03:17:09.138983 kubelet[2893]: E0120 03:17:09.138545 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:17:09.138983 kubelet[2893]: E0120 03:17:09.138687 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:17:09.138983 kubelet[2893]: E0120 03:17:09.138888 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:05f348d2e5bc42b3908323f8f106888c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:09.141270 containerd[1584]: time="2026-01-20T03:17:09.141206320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 
03:17:09.215418 kubelet[2893]: I0120 03:17:09.215056 2893 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27917ed-5aa3-4301-90f7-0eaca88cf88c" path="/var/lib/kubelet/pods/c27917ed-5aa3-4301-90f7-0eaca88cf88c/volumes" Jan 20 03:17:09.216964 containerd[1584]: time="2026-01-20T03:17:09.216865560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-28rcr,Uid:b6dc1880-5e6d-4d78-bdb4-990b30c248de,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:17:09.224375 containerd[1584]: time="2026-01-20T03:17:09.224340100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vfgpq,Uid:4233551d-98b7-48f5-b9e1-45373c718e78,Namespace:calico-system,Attempt:0,}" Jan 20 03:17:09.490508 systemd-networkd[1484]: caliccd5ad8e5ad: Link UP Jan 20 03:17:09.491518 systemd-networkd[1484]: caliccd5ad8e5ad: Gained carrier Jan 20 03:17:09.498621 containerd[1584]: time="2026-01-20T03:17:09.497421897Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:09.500834 containerd[1584]: time="2026-01-20T03:17:09.500240881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 03:17:09.501140 kubelet[2893]: E0120 03:17:09.501097 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:17:09.502666 containerd[1584]: time="2026-01-20T03:17:09.502629820Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 03:17:09.503344 kubelet[2893]: E0120 03:17:09.502773 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:17:09.503344 kubelet[2893]: E0120 03:17:09.502961 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:
nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:09.504800 kubelet[2893]: E0120 03:17:09.504750 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.357 [INFO][4380] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0 csi-node-driver- calico-system 
4233551d-98b7-48f5-b9e1-45373c718e78 728 0 2026-01-20 03:16:41 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com csi-node-driver-vfgpq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliccd5ad8e5ad [] [] }} ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.357 [INFO][4380] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.419 [INFO][4402] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" HandleID="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Workload="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.421 [INFO][4402] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" HandleID="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Workload="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", 
"node":"srv-jqch3.gb1.brightbox.com", "pod":"csi-node-driver-vfgpq", "timestamp":"2026-01-20 03:17:09.419326547 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.421 [INFO][4402] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.421 [INFO][4402] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.421 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.437 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.446 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.452 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.455 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.458 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.458 [INFO][4402] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 
handle="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.461 [INFO][4402] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.468 [INFO][4402] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.475 [INFO][4402] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.195/26] block=192.168.12.192/26 handle="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.475 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.195/26] handle="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.475 [INFO][4402] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 03:17:09.539993 containerd[1584]: 2026-01-20 03:17:09.476 [INFO][4402] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.195/26] IPv6=[] ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" HandleID="k8s-pod-network.e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Workload="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" Jan 20 03:17:09.544949 containerd[1584]: 2026-01-20 03:17:09.481 [INFO][4380] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4233551d-98b7-48f5-b9e1-45373c718e78", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-vfgpq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.195/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd5ad8e5ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:09.544949 containerd[1584]: 2026-01-20 03:17:09.482 [INFO][4380] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.195/32] ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" Jan 20 03:17:09.544949 containerd[1584]: 2026-01-20 03:17:09.482 [INFO][4380] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccd5ad8e5ad ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" Jan 20 03:17:09.544949 containerd[1584]: 2026-01-20 03:17:09.492 [INFO][4380] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" Jan 20 03:17:09.544949 containerd[1584]: 2026-01-20 03:17:09.498 [INFO][4380] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4233551d-98b7-48f5-b9e1-45373c718e78", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d", Pod:"csi-node-driver-vfgpq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd5ad8e5ad", MAC:"ea:54:6d:35:c2:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:09.544949 containerd[1584]: 2026-01-20 03:17:09.532 [INFO][4380] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" Namespace="calico-system" Pod="csi-node-driver-vfgpq" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-csi--node--driver--vfgpq-eth0" Jan 20 03:17:09.557720 kubelet[2893]: E0120 03:17:09.557176 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:17:09.557865 kubelet[2893]: E0120 03:17:09.557726 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:17:09.592968 systemd-networkd[1484]: cali7f6eb2fefc0: Gained IPv6LL Jan 20 03:17:09.603162 containerd[1584]: time="2026-01-20T03:17:09.603097088Z" level=info msg="connecting to shim e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d" address="unix:///run/containerd/s/269fe9060c535f9637dfb17f844a4eb8eeecd9a97806fc2e1011c6a1435c0404" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:09.697927 systemd[1]: Started 
cri-containerd-e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d.scope - libcontainer container e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d. Jan 20 03:17:09.701768 systemd-networkd[1484]: cali2b72ffda368: Link UP Jan 20 03:17:09.704685 systemd-networkd[1484]: cali2b72ffda368: Gained carrier Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.349 [INFO][4375] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0 calico-apiserver-747897bbb- calico-apiserver b6dc1880-5e6d-4d78-bdb4-990b30c248de 842 0 2026-01-20 03:16:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747897bbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com calico-apiserver-747897bbb-28rcr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2b72ffda368 [] [] }} ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.350 [INFO][4375] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.448 [INFO][4400] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" 
HandleID="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.448 [INFO][4400] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" HandleID="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003280a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-jqch3.gb1.brightbox.com", "pod":"calico-apiserver-747897bbb-28rcr", "timestamp":"2026-01-20 03:17:09.448120186 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.448 [INFO][4400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.475 [INFO][4400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.476 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.537 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.550 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.576 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.595 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.612 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.613 [INFO][4400] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.622 [INFO][4400] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7 Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.647 [INFO][4400] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.676 [INFO][4400] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.196/26] block=192.168.12.192/26 handle="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.676 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.196/26] handle="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.676 [INFO][4400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:17:09.751354 containerd[1584]: 2026-01-20 03:17:09.676 [INFO][4400] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.196/26] IPv6=[] ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" HandleID="k8s-pod-network.d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" Jan 20 03:17:09.754183 containerd[1584]: 2026-01-20 03:17:09.688 [INFO][4375] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0", GenerateName:"calico-apiserver-747897bbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6dc1880-5e6d-4d78-bdb4-990b30c248de", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747897bbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-747897bbb-28rcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2b72ffda368", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:09.754183 containerd[1584]: 2026-01-20 03:17:09.688 [INFO][4375] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.196/32] ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" Jan 20 03:17:09.754183 containerd[1584]: 2026-01-20 03:17:09.688 [INFO][4375] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b72ffda368 ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" Jan 20 03:17:09.754183 containerd[1584]: 2026-01-20 03:17:09.724 [INFO][4375] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" 
Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" Jan 20 03:17:09.754183 containerd[1584]: 2026-01-20 03:17:09.731 [INFO][4375] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0", GenerateName:"calico-apiserver-747897bbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6dc1880-5e6d-4d78-bdb4-990b30c248de", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747897bbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7", Pod:"calico-apiserver-747897bbb-28rcr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali2b72ffda368", MAC:"1a:81:65:c8:1c:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:09.754183 containerd[1584]: 2026-01-20 03:17:09.746 [INFO][4375] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-28rcr" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--28rcr-eth0" Jan 20 03:17:09.793994 containerd[1584]: time="2026-01-20T03:17:09.793870049Z" level=info msg="connecting to shim d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7" address="unix:///run/containerd/s/243dc27095fae99ae076f5bafba6106bce988b92c5bc50f57939d7cf0cc540f6" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:09.878805 systemd[1]: Started cri-containerd-d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7.scope - libcontainer container d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7. 
Jan 20 03:17:09.926796 containerd[1584]: time="2026-01-20T03:17:09.926624582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vfgpq,Uid:4233551d-98b7-48f5-b9e1-45373c718e78,Namespace:calico-system,Attempt:0,} returns sandbox id \"e86e4a7a47280d00398a97b64e5af067ffcaed3f7bf17b6eae2594bde998758d\"" Jan 20 03:17:09.931612 containerd[1584]: time="2026-01-20T03:17:09.931544254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 03:17:09.972780 systemd-networkd[1484]: cali502b858f749: Gained IPv6LL Jan 20 03:17:10.053422 systemd-networkd[1484]: vxlan.calico: Link UP Jan 20 03:17:10.053434 systemd-networkd[1484]: vxlan.calico: Gained carrier Jan 20 03:17:10.171686 containerd[1584]: time="2026-01-20T03:17:10.171633366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-28rcr,Uid:b6dc1880-5e6d-4d78-bdb4-990b30c248de,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d5e9e28f56ba7767bea422a74ba7d5839b791e4de5b615f9e83d3db8dd23b5b7\"" Jan 20 03:17:10.213490 containerd[1584]: time="2026-01-20T03:17:10.213367625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k4tgb,Uid:ee08e93c-1ce2-4867-b712-3472379ca931,Namespace:kube-system,Attempt:0,}" Jan 20 03:17:10.303503 containerd[1584]: time="2026-01-20T03:17:10.303448183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:10.304609 containerd[1584]: time="2026-01-20T03:17:10.304524481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 03:17:10.305044 containerd[1584]: time="2026-01-20T03:17:10.304869260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 
03:17:10.305198 kubelet[2893]: E0120 03:17:10.304998 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:17:10.305198 kubelet[2893]: E0120 03:17:10.305160 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:17:10.306537 containerd[1584]: time="2026-01-20T03:17:10.306268988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:17:10.306797 kubelet[2893]: E0120 03:17:10.306630 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:10.400486 systemd-networkd[1484]: cali253613469c3: Link UP Jan 20 03:17:10.402328 systemd-networkd[1484]: cali253613469c3: Gained carrier Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.276 [INFO][4567] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0 coredns-674b8bbfcf- kube-system ee08e93c-1ce2-4867-b712-3472379ca931 835 0 2026-01-20 03:16:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com coredns-674b8bbfcf-k4tgb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali253613469c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.276 [INFO][4567] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.324 [INFO][4578] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" HandleID="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Workload="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.325 [INFO][4578] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" HandleID="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Workload="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef90), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-jqch3.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-k4tgb", "timestamp":"2026-01-20 03:17:10.324846538 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.325 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.325 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.325 [INFO][4578] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.336 [INFO][4578] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.346 [INFO][4578] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.355 [INFO][4578] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.357 [INFO][4578] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.361 [INFO][4578] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.361 [INFO][4578] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.363 [INFO][4578] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4 Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.371 [INFO][4578] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.383 [INFO][4578] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.197/26] block=192.168.12.192/26 handle="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.386 [INFO][4578] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.197/26] handle="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.386 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:17:10.429321 containerd[1584]: 2026-01-20 03:17:10.386 [INFO][4578] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.197/26] IPv6=[] ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" HandleID="k8s-pod-network.07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Workload="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" Jan 20 03:17:10.432719 containerd[1584]: 2026-01-20 03:17:10.395 [INFO][4567] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee08e93c-1ce2-4867-b712-3472379ca931", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-k4tgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali253613469c3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:10.432719 containerd[1584]: 2026-01-20 03:17:10.396 [INFO][4567] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.197/32] ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" Jan 20 03:17:10.432719 containerd[1584]: 2026-01-20 03:17:10.396 [INFO][4567] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali253613469c3 ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" Jan 20 03:17:10.432719 containerd[1584]: 
2026-01-20 03:17:10.403 [INFO][4567] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" Jan 20 03:17:10.432719 containerd[1584]: 2026-01-20 03:17:10.404 [INFO][4567] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ee08e93c-1ce2-4867-b712-3472379ca931", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4", Pod:"coredns-674b8bbfcf-k4tgb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali253613469c3", MAC:"4a:63:b8:f2:1e:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:10.433047 containerd[1584]: 2026-01-20 03:17:10.425 [INFO][4567] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" Namespace="kube-system" Pod="coredns-674b8bbfcf-k4tgb" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--k4tgb-eth0" Jan 20 03:17:10.458303 containerd[1584]: time="2026-01-20T03:17:10.458123519Z" level=info msg="connecting to shim 07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4" address="unix:///run/containerd/s/1f5f267657ae52db5b7ca4757af1ad888b91b174a51c6fd3852d79c4da4ead21" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:10.495814 systemd[1]: Started cri-containerd-07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4.scope - libcontainer container 07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4. 
Jan 20 03:17:10.567092 kubelet[2893]: E0120 03:17:10.567025 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:17:10.626111 containerd[1584]: time="2026-01-20T03:17:10.625892388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:10.629331 containerd[1584]: time="2026-01-20T03:17:10.628841906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:17:10.629331 containerd[1584]: time="2026-01-20T03:17:10.628916369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:10.629451 kubelet[2893]: E0120 03:17:10.629039 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:10.629451 kubelet[2893]: E0120 03:17:10.629087 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:10.632412 containerd[1584]: time="2026-01-20T03:17:10.632364039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 03:17:10.637181 kubelet[2893]: E0120 03:17:10.637118 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68wqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747897bbb-28rcr_calico-apiserver(b6dc1880-5e6d-4d78-bdb4-990b30c248de): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:10.638347 kubelet[2893]: E0120 03:17:10.638297 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:17:10.659680 containerd[1584]: time="2026-01-20T03:17:10.659478344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-k4tgb,Uid:ee08e93c-1ce2-4867-b712-3472379ca931,Namespace:kube-system,Attempt:0,} returns sandbox id \"07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4\"" Jan 20 03:17:10.673832 containerd[1584]: time="2026-01-20T03:17:10.673779563Z" level=info msg="CreateContainer within sandbox \"07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 03:17:10.696737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434209442.mount: Deactivated successfully. 
Jan 20 03:17:10.697852 containerd[1584]: time="2026-01-20T03:17:10.696928669Z" level=info msg="Container ce8974123aa82cd82566b742b3c9f6a19003794f29ac78962ea46001e944d911: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:17:10.710164 containerd[1584]: time="2026-01-20T03:17:10.710074634Z" level=info msg="CreateContainer within sandbox \"07bade60f730e1eb495943eaee79d995250263052135cd295acdb059a90b5aa4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce8974123aa82cd82566b742b3c9f6a19003794f29ac78962ea46001e944d911\"" Jan 20 03:17:10.712926 containerd[1584]: time="2026-01-20T03:17:10.712858725Z" level=info msg="StartContainer for \"ce8974123aa82cd82566b742b3c9f6a19003794f29ac78962ea46001e944d911\"" Jan 20 03:17:10.714504 containerd[1584]: time="2026-01-20T03:17:10.714402982Z" level=info msg="connecting to shim ce8974123aa82cd82566b742b3c9f6a19003794f29ac78962ea46001e944d911" address="unix:///run/containerd/s/1f5f267657ae52db5b7ca4757af1ad888b91b174a51c6fd3852d79c4da4ead21" protocol=ttrpc version=3 Jan 20 03:17:10.758792 systemd[1]: Started cri-containerd-ce8974123aa82cd82566b742b3c9f6a19003794f29ac78962ea46001e944d911.scope - libcontainer container ce8974123aa82cd82566b742b3c9f6a19003794f29ac78962ea46001e944d911. 
Jan 20 03:17:10.843471 containerd[1584]: time="2026-01-20T03:17:10.843385499Z" level=info msg="StartContainer for \"ce8974123aa82cd82566b742b3c9f6a19003794f29ac78962ea46001e944d911\" returns successfully" Jan 20 03:17:10.930797 systemd-networkd[1484]: caliccd5ad8e5ad: Gained IPv6LL Jan 20 03:17:10.941107 containerd[1584]: time="2026-01-20T03:17:10.940971068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:10.943929 containerd[1584]: time="2026-01-20T03:17:10.943872885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 03:17:10.944149 containerd[1584]: time="2026-01-20T03:17:10.943974896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 03:17:10.944662 kubelet[2893]: E0120 03:17:10.944568 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:17:10.944869 kubelet[2893]: E0120 03:17:10.944799 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" 
Jan 20 03:17:10.945065 kubelet[2893]: E0120 03:17:10.944983 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:10.946319 kubelet[2893]: E0120 03:17:10.946249 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:17:11.214928 containerd[1584]: time="2026-01-20T03:17:11.213662863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-jcg9v,Uid:2fb46006-e4ca-4a17-9db5-a5327a1b235a,Namespace:calico-apiserver,Attempt:0,}" Jan 20 03:17:11.378862 systemd-networkd[1484]: cali2b72ffda368: Gained IPv6LL Jan 20 03:17:11.380554 systemd-networkd[1484]: vxlan.calico: Gained IPv6LL Jan 20 03:17:11.400350 systemd-networkd[1484]: cali9e4e6e12b2d: Link UP Jan 20 03:17:11.402169 systemd-networkd[1484]: cali9e4e6e12b2d: Gained carrier Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.303 [INFO][4715] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0 calico-apiserver-747897bbb- calico-apiserver 2fb46006-e4ca-4a17-9db5-a5327a1b235a 843 0 2026-01-20 03:16:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:747897bbb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com calico-apiserver-747897bbb-jcg9v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9e4e6e12b2d [] [] }} ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.303 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.343 [INFO][4726] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" HandleID="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.343 [INFO][4726] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" HandleID="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" 
Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-jqch3.gb1.brightbox.com", "pod":"calico-apiserver-747897bbb-jcg9v", "timestamp":"2026-01-20 03:17:11.343675872 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.343 [INFO][4726] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.344 [INFO][4726] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.345 [INFO][4726] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.357 [INFO][4726] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.364 [INFO][4726] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.370 [INFO][4726] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.373 [INFO][4726] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.376 [INFO][4726] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 
host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.376 [INFO][4726] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.379 [INFO][4726] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931 Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.385 [INFO][4726] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.392 [INFO][4726] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.198/26] block=192.168.12.192/26 handle="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.392 [INFO][4726] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.198/26] handle="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.392 [INFO][4726] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 03:17:11.422577 containerd[1584]: 2026-01-20 03:17:11.392 [INFO][4726] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.198/26] IPv6=[] ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" HandleID="k8s-pod-network.f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Workload="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" Jan 20 03:17:11.425149 containerd[1584]: 2026-01-20 03:17:11.396 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0", GenerateName:"calico-apiserver-747897bbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2fb46006-e4ca-4a17-9db5-a5327a1b235a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747897bbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-747897bbb-jcg9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.12.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e4e6e12b2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:11.425149 containerd[1584]: 2026-01-20 03:17:11.396 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.198/32] ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" Jan 20 03:17:11.425149 containerd[1584]: 2026-01-20 03:17:11.396 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9e4e6e12b2d ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" Jan 20 03:17:11.425149 containerd[1584]: 2026-01-20 03:17:11.403 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" Jan 20 03:17:11.425149 containerd[1584]: 2026-01-20 03:17:11.403 [INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0", GenerateName:"calico-apiserver-747897bbb-", Namespace:"calico-apiserver", SelfLink:"", UID:"2fb46006-e4ca-4a17-9db5-a5327a1b235a", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"747897bbb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931", Pod:"calico-apiserver-747897bbb-jcg9v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9e4e6e12b2d", MAC:"0e:e6:df:91:e8:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:11.425149 containerd[1584]: 2026-01-20 03:17:11.418 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" Namespace="calico-apiserver" Pod="calico-apiserver-747897bbb-jcg9v" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-calico--apiserver--747897bbb--jcg9v-eth0" Jan 20 03:17:11.455880 containerd[1584]: time="2026-01-20T03:17:11.455816380Z" level=info 
msg="connecting to shim f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931" address="unix:///run/containerd/s/408e60376e59a26ed52b1c6ec1832b39f43b24eaf090a53936c53776b3c981f3" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:11.489868 systemd[1]: Started cri-containerd-f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931.scope - libcontainer container f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931. Jan 20 03:17:11.558528 containerd[1584]: time="2026-01-20T03:17:11.558468127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-747897bbb-jcg9v,Uid:2fb46006-e4ca-4a17-9db5-a5327a1b235a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f5b72ed7af517e685f7e9fe6b9aee521fedaa78d74d8c77fd5f2ac5b25489931\"" Jan 20 03:17:11.564103 containerd[1584]: time="2026-01-20T03:17:11.563826361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:17:11.598059 kubelet[2893]: E0120 03:17:11.597866 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:17:11.599969 kubelet[2893]: E0120 03:17:11.599345 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:17:11.643656 kubelet[2893]: I0120 03:17:11.642889 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-k4tgb" podStartSLOduration=50.642869749 podStartE2EDuration="50.642869749s" podCreationTimestamp="2026-01-20 03:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:17:11.62334632 +0000 UTC m=+56.682817352" watchObservedRunningTime="2026-01-20 03:17:11.642869749 +0000 UTC m=+56.702340832" Jan 20 03:17:11.698815 systemd-networkd[1484]: cali253613469c3: Gained IPv6LL Jan 20 03:17:11.884174 containerd[1584]: time="2026-01-20T03:17:11.883974062Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:11.886314 containerd[1584]: time="2026-01-20T03:17:11.886251239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:17:11.886395 containerd[1584]: time="2026-01-20T03:17:11.886365911Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:11.887036 kubelet[2893]: E0120 03:17:11.886626 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:11.887036 kubelet[2893]: E0120 03:17:11.886704 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:11.887036 kubelet[2893]: E0120 03:17:11.886879 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw8dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747897bbb-jcg9v_calico-apiserver(2fb46006-e4ca-4a17-9db5-a5327a1b235a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:11.888704 kubelet[2893]: E0120 03:17:11.888642 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:17:12.597370 kubelet[2893]: E0120 03:17:12.597304 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:17:13.106857 systemd-networkd[1484]: cali9e4e6e12b2d: Gained IPv6LL Jan 20 03:17:18.210498 containerd[1584]: time="2026-01-20T03:17:18.210381246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfg4,Uid:5646518a-7477-4fd5-b634-ed0d62c37fd4,Namespace:kube-system,Attempt:0,}" Jan 20 03:17:18.211694 containerd[1584]: time="2026-01-20T03:17:18.210383349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nbjcp,Uid:ec36c53c-7c05-428f-8474-ef17694fd900,Namespace:calico-system,Attempt:0,}" Jan 20 03:17:18.414382 
systemd-networkd[1484]: cali4c71a882d9f: Link UP Jan 20 03:17:18.415785 systemd-networkd[1484]: cali4c71a882d9f: Gained carrier Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.296 [INFO][4809] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0 goldmane-666569f655- calico-system ec36c53c-7c05-428f-8474-ef17694fd900 844 0 2026-01-20 03:16:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com goldmane-666569f655-nbjcp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4c71a882d9f [] [] }} ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.296 [INFO][4809] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.353 [INFO][4831] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" HandleID="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Workload="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.354 [INFO][4831] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" HandleID="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Workload="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-jqch3.gb1.brightbox.com", "pod":"goldmane-666569f655-nbjcp", "timestamp":"2026-01-20 03:17:18.353409157 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.354 [INFO][4831] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.354 [INFO][4831] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.354 [INFO][4831] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.367 [INFO][4831] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.376 [INFO][4831] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.381 [INFO][4831] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.384 [INFO][4831] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.387 [INFO][4831] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.387 [INFO][4831] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.389 [INFO][4831] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6 Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.395 [INFO][4831] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.403 [INFO][4831] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.199/26] block=192.168.12.192/26 handle="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.403 [INFO][4831] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.199/26] handle="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.403 [INFO][4831] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:17:18.450026 containerd[1584]: 2026-01-20 03:17:18.403 [INFO][4831] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.199/26] IPv6=[] ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" HandleID="k8s-pod-network.f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Workload="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" Jan 20 03:17:18.451887 containerd[1584]: 2026-01-20 03:17:18.407 [INFO][4809] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ec36c53c-7c05-428f-8474-ef17694fd900", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-nbjcp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c71a882d9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:18.451887 containerd[1584]: 2026-01-20 03:17:18.407 [INFO][4809] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.199/32] ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" Jan 20 03:17:18.451887 containerd[1584]: 2026-01-20 03:17:18.407 [INFO][4809] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c71a882d9f ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" Jan 20 03:17:18.451887 containerd[1584]: 2026-01-20 03:17:18.417 [INFO][4809] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" Jan 20 03:17:18.451887 containerd[1584]: 
2026-01-20 03:17:18.420 [INFO][4809] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ec36c53c-7c05-428f-8474-ef17694fd900", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6", Pod:"goldmane-666569f655-nbjcp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c71a882d9f", MAC:"f6:c0:df:32:d1:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:18.451887 containerd[1584]: 2026-01-20 03:17:18.438 [INFO][4809] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" Namespace="calico-system" Pod="goldmane-666569f655-nbjcp" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-goldmane--666569f655--nbjcp-eth0" Jan 20 03:17:18.516727 containerd[1584]: time="2026-01-20T03:17:18.516528752Z" level=info msg="connecting to shim f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6" address="unix:///run/containerd/s/d32dd9051cd9ea3367f889ada8014a24405d830d21d11ef313695e8304eb0d5a" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:18.549244 systemd-networkd[1484]: calif47b255cf02: Link UP Jan 20 03:17:18.550262 systemd-networkd[1484]: calif47b255cf02: Gained carrier Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.312 [INFO][4806] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0 coredns-674b8bbfcf- kube-system 5646518a-7477-4fd5-b634-ed0d62c37fd4 841 0 2026-01-20 03:16:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-jqch3.gb1.brightbox.com coredns-674b8bbfcf-bpfg4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif47b255cf02 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.312 [INFO][4806] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" 
Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.361 [INFO][4836] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" HandleID="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Workload="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.362 [INFO][4836] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" HandleID="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Workload="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-jqch3.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-bpfg4", "timestamp":"2026-01-20 03:17:18.361797657 +0000 UTC"}, Hostname:"srv-jqch3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.362 [INFO][4836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.403 [INFO][4836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.404 [INFO][4836] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-jqch3.gb1.brightbox.com' Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.469 [INFO][4836] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.477 [INFO][4836] ipam/ipam.go 394: Looking up existing affinities for host host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.491 [INFO][4836] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.495 [INFO][4836] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.500 [INFO][4836] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.500 [INFO][4836] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.503 [INFO][4836] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46 Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.513 [INFO][4836] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.526 [INFO][4836] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.200/26] block=192.168.12.192/26 handle="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.526 [INFO][4836] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.200/26] handle="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" host="srv-jqch3.gb1.brightbox.com" Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.527 [INFO][4836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 03:17:18.585067 containerd[1584]: 2026-01-20 03:17:18.527 [INFO][4836] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.200/26] IPv6=[] ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" HandleID="k8s-pod-network.b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Workload="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" Jan 20 03:17:18.588576 containerd[1584]: 2026-01-20 03:17:18.533 [INFO][4806] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5646518a-7477-4fd5-b634-ed0d62c37fd4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-bpfg4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif47b255cf02", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:18.588576 containerd[1584]: 2026-01-20 03:17:18.533 [INFO][4806] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.200/32] ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" Jan 20 03:17:18.588576 containerd[1584]: 2026-01-20 03:17:18.533 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif47b255cf02 ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" Jan 20 03:17:18.588576 containerd[1584]: 
2026-01-20 03:17:18.550 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" Jan 20 03:17:18.588576 containerd[1584]: 2026-01-20 03:17:18.551 [INFO][4806] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5646518a-7477-4fd5-b634-ed0d62c37fd4", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 3, 16, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-jqch3.gb1.brightbox.com", ContainerID:"b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46", Pod:"coredns-674b8bbfcf-bpfg4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calif47b255cf02", MAC:"46:4b:e2:3a:5b:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 03:17:18.586898 systemd[1]: Started cri-containerd-f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6.scope - libcontainer container f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6. Jan 20 03:17:18.591842 containerd[1584]: 2026-01-20 03:17:18.574 [INFO][4806] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" Namespace="kube-system" Pod="coredns-674b8bbfcf-bpfg4" WorkloadEndpoint="srv--jqch3.gb1.brightbox.com-k8s-coredns--674b8bbfcf--bpfg4-eth0" Jan 20 03:17:18.629953 containerd[1584]: time="2026-01-20T03:17:18.629821026Z" level=info msg="connecting to shim b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46" address="unix:///run/containerd/s/97ffaffa4667c6245a5bc71b318711c8699fbe6614d4d39586c4c7005e9c97e0" namespace=k8s.io protocol=ttrpc version=3 Jan 20 03:17:18.689146 systemd[1]: Started cri-containerd-b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46.scope - libcontainer container b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46. 
Jan 20 03:17:18.804399 containerd[1584]: time="2026-01-20T03:17:18.803318017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nbjcp,Uid:ec36c53c-7c05-428f-8474-ef17694fd900,Namespace:calico-system,Attempt:0,} returns sandbox id \"f6b6df9498e6d08c13d8a49822ba5e9941251005eb69197bdfd466b1d117dba6\"" Jan 20 03:17:18.809324 containerd[1584]: time="2026-01-20T03:17:18.809094546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 03:17:18.823664 containerd[1584]: time="2026-01-20T03:17:18.823292390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bpfg4,Uid:5646518a-7477-4fd5-b634-ed0d62c37fd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46\"" Jan 20 03:17:18.832330 containerd[1584]: time="2026-01-20T03:17:18.832193570Z" level=info msg="CreateContainer within sandbox \"b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 03:17:18.841766 containerd[1584]: time="2026-01-20T03:17:18.841704855Z" level=info msg="Container bd309f4865f6322fcdb7da6707f57d43fcdd6291d309cf6b26049c2e26dbfa39: CDI devices from CRI Config.CDIDevices: []" Jan 20 03:17:18.854436 containerd[1584]: time="2026-01-20T03:17:18.854397960Z" level=info msg="CreateContainer within sandbox \"b3e9719e440640f86a64db74961e59613588c5c065440d988a27d1cc25ce1a46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd309f4865f6322fcdb7da6707f57d43fcdd6291d309cf6b26049c2e26dbfa39\"" Jan 20 03:17:18.857068 containerd[1584]: time="2026-01-20T03:17:18.855706061Z" level=info msg="StartContainer for \"bd309f4865f6322fcdb7da6707f57d43fcdd6291d309cf6b26049c2e26dbfa39\"" Jan 20 03:17:18.857760 containerd[1584]: time="2026-01-20T03:17:18.857711984Z" level=info msg="connecting to shim bd309f4865f6322fcdb7da6707f57d43fcdd6291d309cf6b26049c2e26dbfa39" 
address="unix:///run/containerd/s/97ffaffa4667c6245a5bc71b318711c8699fbe6614d4d39586c4c7005e9c97e0" protocol=ttrpc version=3 Jan 20 03:17:18.892888 systemd[1]: Started cri-containerd-bd309f4865f6322fcdb7da6707f57d43fcdd6291d309cf6b26049c2e26dbfa39.scope - libcontainer container bd309f4865f6322fcdb7da6707f57d43fcdd6291d309cf6b26049c2e26dbfa39. Jan 20 03:17:18.961793 containerd[1584]: time="2026-01-20T03:17:18.961562443Z" level=info msg="StartContainer for \"bd309f4865f6322fcdb7da6707f57d43fcdd6291d309cf6b26049c2e26dbfa39\" returns successfully" Jan 20 03:17:19.146961 containerd[1584]: time="2026-01-20T03:17:19.146810017Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:19.148703 containerd[1584]: time="2026-01-20T03:17:19.148403192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 03:17:19.148796 containerd[1584]: time="2026-01-20T03:17:19.148776727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:19.149480 kubelet[2893]: E0120 03:17:19.149095 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:17:19.149480 kubelet[2893]: E0120 03:17:19.149164 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:17:19.149480 kubelet[2893]: E0120 03:17:19.149395 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-226x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:19.151652 kubelet[2893]: E0120 03:17:19.151409 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 
03:17:19.621260 kubelet[2893]: E0120 03:17:19.621186 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:17:19.660914 kubelet[2893]: I0120 03:17:19.660828 2893 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bpfg4" podStartSLOduration=58.660794114 podStartE2EDuration="58.660794114s" podCreationTimestamp="2026-01-20 03:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 03:17:19.657352549 +0000 UTC m=+64.716823597" watchObservedRunningTime="2026-01-20 03:17:19.660794114 +0000 UTC m=+64.720265137" Jan 20 03:17:20.210853 systemd-networkd[1484]: cali4c71a882d9f: Gained IPv6LL Jan 20 03:17:20.530830 systemd-networkd[1484]: calif47b255cf02: Gained IPv6LL Jan 20 03:17:20.636027 kubelet[2893]: E0120 03:17:20.635961 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:17:22.215413 containerd[1584]: 
time="2026-01-20T03:17:22.215191452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 03:17:22.543777 containerd[1584]: time="2026-01-20T03:17:22.543459595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:22.545050 containerd[1584]: time="2026-01-20T03:17:22.545001863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 03:17:22.545178 containerd[1584]: time="2026-01-20T03:17:22.545123424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 03:17:22.545444 kubelet[2893]: E0120 03:17:22.545383 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:17:22.545964 kubelet[2893]: E0120 03:17:22.545473 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:17:22.546018 kubelet[2893]: E0120 03:17:22.545888 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:05f348d2e5bc42b3908323f8f106888c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:22.546994 containerd[1584]: time="2026-01-20T03:17:22.546688729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 
03:17:22.854469 containerd[1584]: time="2026-01-20T03:17:22.854269033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:22.855992 containerd[1584]: time="2026-01-20T03:17:22.855801580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 03:17:22.855992 containerd[1584]: time="2026-01-20T03:17:22.855943525Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 03:17:22.856951 kubelet[2893]: E0120 03:17:22.856544 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:17:22.856951 kubelet[2893]: E0120 03:17:22.856682 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:17:22.857201 kubelet[2893]: E0120 03:17:22.857102 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr5mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cb458d8fc-vxcdh_calico-system(f813f3ef-562d-4e92-bd19-fa37c63ad294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:22.857655 containerd[1584]: time="2026-01-20T03:17:22.857521180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 03:17:22.858491 kubelet[2893]: E0120 03:17:22.858440 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:17:23.162272 
containerd[1584]: time="2026-01-20T03:17:23.161803126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:23.163663 containerd[1584]: time="2026-01-20T03:17:23.163616592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 03:17:23.163749 containerd[1584]: time="2026-01-20T03:17:23.163626905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 03:17:23.164623 kubelet[2893]: E0120 03:17:23.164168 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:17:23.164623 kubelet[2893]: E0120 03:17:23.164271 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:17:23.164623 kubelet[2893]: E0120 03:17:23.164535 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:23.166711 kubelet[2893]: E0120 03:17:23.166638 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:17:23.216172 containerd[1584]: time="2026-01-20T03:17:23.214943219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:17:23.522057 containerd[1584]: time="2026-01-20T03:17:23.521936894Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:23.523553 containerd[1584]: time="2026-01-20T03:17:23.523387231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:17:23.523553 containerd[1584]: time="2026-01-20T03:17:23.523501495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:23.524073 
kubelet[2893]: E0120 03:17:23.524004 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:23.524193 kubelet[2893]: E0120 03:17:23.524108 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:23.524485 kubelet[2893]: E0120 03:17:23.524404 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw8dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747897bbb-jcg9v_calico-apiserver(2fb46006-e4ca-4a17-9db5-a5327a1b235a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:23.526385 kubelet[2893]: E0120 03:17:23.526336 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:17:24.212244 containerd[1584]: time="2026-01-20T03:17:24.212178940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 03:17:24.520327 containerd[1584]: time="2026-01-20T03:17:24.520023934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:24.521525 containerd[1584]: time="2026-01-20T03:17:24.521445921Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 03:17:24.521751 containerd[1584]: time="2026-01-20T03:17:24.521512427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 03:17:24.522124 kubelet[2893]: E0120 03:17:24.522025 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:17:24.522124 kubelet[2893]: E0120 03:17:24.522097 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:17:24.524509 kubelet[2893]: E0120 03:17:24.524437 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivileg
eEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:24.526897 containerd[1584]: time="2026-01-20T03:17:24.526663799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 03:17:24.829977 containerd[1584]: time="2026-01-20T03:17:24.829685246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:24.832067 containerd[1584]: time="2026-01-20T03:17:24.831872308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 03:17:24.832067 containerd[1584]: time="2026-01-20T03:17:24.831922384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 03:17:24.832634 kubelet[2893]: E0120 03:17:24.832504 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:17:24.832954 kubelet[2893]: E0120 03:17:24.832810 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:17:24.833343 kubelet[2893]: E0120 03:17:24.833218 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,Terminat
ionMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:24.834540 kubelet[2893]: E0120 03:17:24.834443 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:17:26.211977 containerd[1584]: time="2026-01-20T03:17:26.211644903Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:17:26.519972 containerd[1584]: time="2026-01-20T03:17:26.519902123Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:26.521471 containerd[1584]: time="2026-01-20T03:17:26.521412877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:17:26.521563 containerd[1584]: time="2026-01-20T03:17:26.521507155Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:26.522228 kubelet[2893]: E0120 03:17:26.521764 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:26.522228 kubelet[2893]: E0120 03:17:26.521836 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:26.522228 kubelet[2893]: E0120 03:17:26.522063 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68wqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747897bbb-28rcr_calico-apiserver(b6dc1880-5e6d-4d78-bdb4-990b30c248de): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:26.524239 kubelet[2893]: E0120 03:17:26.523617 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:17:32.213093 containerd[1584]: time="2026-01-20T03:17:32.213009000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 03:17:32.523238 containerd[1584]: time="2026-01-20T03:17:32.523146094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:32.524811 containerd[1584]: time="2026-01-20T03:17:32.524756835Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 03:17:32.524942 containerd[1584]: time="2026-01-20T03:17:32.524897526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:32.525342 kubelet[2893]: E0120 03:17:32.525176 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:17:32.525342 kubelet[2893]: E0120 03:17:32.525300 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:17:32.526645 kubelet[2893]: E0120 03:17:32.526376 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-226x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:32.527711 kubelet[2893]: E0120 03:17:32.527662 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:17:34.213618 kubelet[2893]: E0120 03:17:34.213322 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:17:38.211774 kubelet[2893]: E0120 03:17:38.211711 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:17:38.212571 kubelet[2893]: E0120 03:17:38.212527 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:17:39.212688 kubelet[2893]: E0120 03:17:39.212282 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:17:39.216250 kubelet[2893]: E0120 03:17:39.216175 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:17:39.727142 systemd[1]: Started sshd@13-10.230.49.118:22-164.92.217.44:38006.service - OpenSSH per-connection server daemon (164.92.217.44:38006). Jan 20 03:17:39.935662 sshd[5045]: Invalid user search from 164.92.217.44 port 38006 Jan 20 03:17:39.964659 sshd[5045]: Connection closed by invalid user search 164.92.217.44 port 38006 [preauth] Jan 20 03:17:39.966700 systemd[1]: sshd@13-10.230.49.118:22-164.92.217.44:38006.service: Deactivated successfully. Jan 20 03:17:42.187070 systemd[1]: Started sshd@14-10.230.49.118:22-20.161.92.111:34790.service - OpenSSH per-connection server daemon (20.161.92.111:34790). Jan 20 03:17:42.788648 sshd[5055]: Accepted publickey for core from 20.161.92.111 port 34790 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:17:42.793188 sshd-session[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:42.807609 systemd-logind[1564]: New session 12 of user core. Jan 20 03:17:42.810830 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 03:17:43.891671 sshd[5058]: Connection closed by 20.161.92.111 port 34790 Jan 20 03:17:43.892180 sshd-session[5055]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:43.910571 systemd[1]: sshd@14-10.230.49.118:22-20.161.92.111:34790.service: Deactivated successfully. Jan 20 03:17:43.916464 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 03:17:43.927755 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Jan 20 03:17:43.930942 systemd-logind[1564]: Removed session 12. 
Jan 20 03:17:47.211828 kubelet[2893]: E0120 03:17:47.211373 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:17:49.000259 systemd[1]: Started sshd@15-10.230.49.118:22-20.161.92.111:50138.service - OpenSSH per-connection server daemon (20.161.92.111:50138). Jan 20 03:17:49.212843 containerd[1584]: time="2026-01-20T03:17:49.212226565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 03:17:49.540903 containerd[1584]: time="2026-01-20T03:17:49.540673984Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:49.541830 containerd[1584]: time="2026-01-20T03:17:49.541725719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 03:17:49.541830 containerd[1584]: time="2026-01-20T03:17:49.541806271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 03:17:49.542256 kubelet[2893]: E0120 03:17:49.542191 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:17:49.543105 kubelet[2893]: E0120 03:17:49.542262 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:17:49.556438 kubelet[2893]: E0120 03:17:49.556286 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr5mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cb458d8fc-vxcdh_calico-system(f813f3ef-562d-4e92-bd19-fa37c63ad294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:49.558104 kubelet[2893]: E0120 03:17:49.558045 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:17:49.622571 sshd[5077]: Accepted publickey for core from 20.161.92.111 port 50138 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:17:49.625469 sshd-session[5077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:49.633440 systemd-logind[1564]: New session 13 of user core. Jan 20 03:17:49.641778 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 03:17:50.160458 sshd[5080]: Connection closed by 20.161.92.111 port 50138 Jan 20 03:17:50.161090 sshd-session[5077]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:50.167674 systemd[1]: sshd@15-10.230.49.118:22-20.161.92.111:50138.service: Deactivated successfully. Jan 20 03:17:50.171659 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 03:17:50.173884 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Jan 20 03:17:50.176977 systemd-logind[1564]: Removed session 13. 
Jan 20 03:17:50.212149 containerd[1584]: time="2026-01-20T03:17:50.212061254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:17:50.515624 containerd[1584]: time="2026-01-20T03:17:50.515549262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:50.517241 containerd[1584]: time="2026-01-20T03:17:50.517187206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:17:50.517302 containerd[1584]: time="2026-01-20T03:17:50.517280900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:50.517506 kubelet[2893]: E0120 03:17:50.517448 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:50.517648 kubelet[2893]: E0120 03:17:50.517524 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:50.517826 kubelet[2893]: E0120 03:17:50.517761 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68wqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-747897bbb-28rcr_calico-apiserver(b6dc1880-5e6d-4d78-bdb4-990b30c248de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:50.519317 kubelet[2893]: E0120 03:17:50.519183 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:17:51.212172 containerd[1584]: time="2026-01-20T03:17:51.211906547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:17:51.541119 containerd[1584]: time="2026-01-20T03:17:51.541040900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:51.542320 containerd[1584]: time="2026-01-20T03:17:51.542255808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:17:51.542428 containerd[1584]: time="2026-01-20T03:17:51.542357341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:17:51.542715 kubelet[2893]: E0120 03:17:51.542639 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:51.543145 kubelet[2893]: E0120 03:17:51.542724 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:17:51.543145 kubelet[2893]: E0120 03:17:51.542927 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw8dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747897bbb-jcg9v_calico-apiserver(2fb46006-e4ca-4a17-9db5-a5327a1b235a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:51.544606 kubelet[2893]: E0120 03:17:51.544534 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:17:52.212840 containerd[1584]: time="2026-01-20T03:17:52.212726505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 03:17:52.562974 containerd[1584]: 
time="2026-01-20T03:17:52.562736210Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:52.564048 containerd[1584]: time="2026-01-20T03:17:52.563909452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 03:17:52.564048 containerd[1584]: time="2026-01-20T03:17:52.564002120Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 03:17:52.564262 kubelet[2893]: E0120 03:17:52.564184 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:17:52.564262 kubelet[2893]: E0120 03:17:52.564255 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:17:52.565143 kubelet[2893]: E0120 03:17:52.564436 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:52.567390 containerd[1584]: time="2026-01-20T03:17:52.567254880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 03:17:52.879492 containerd[1584]: time="2026-01-20T03:17:52.879209641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:52.880715 containerd[1584]: time="2026-01-20T03:17:52.880666043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 03:17:52.880934 containerd[1584]: time="2026-01-20T03:17:52.880703851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 03:17:52.881043 kubelet[2893]: E0120 03:17:52.880985 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:17:52.881179 kubelet[2893]: E0120 03:17:52.881057 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:17:52.882163 kubelet[2893]: E0120 
03:17:52.882072 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:52.884392 kubelet[2893]: E0120 03:17:52.884339 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:17:53.214421 containerd[1584]: time="2026-01-20T03:17:53.212864111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 03:17:53.524469 containerd[1584]: time="2026-01-20T03:17:53.524319837Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:53.525553 containerd[1584]: time="2026-01-20T03:17:53.525500720Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 03:17:53.525652 
containerd[1584]: time="2026-01-20T03:17:53.525620955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 03:17:53.525916 kubelet[2893]: E0120 03:17:53.525857 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:17:53.526066 kubelet[2893]: E0120 03:17:53.525944 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:17:53.526363 kubelet[2893]: E0120 03:17:53.526165 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:05f348d2e5bc42b3908323f8f106888c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:53.531611 containerd[1584]: time="2026-01-20T03:17:53.531409450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 
03:17:53.843193 containerd[1584]: time="2026-01-20T03:17:53.842775777Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:17:53.845081 containerd[1584]: time="2026-01-20T03:17:53.844929021Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 03:17:53.845081 containerd[1584]: time="2026-01-20T03:17:53.844943005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 03:17:53.845526 kubelet[2893]: E0120 03:17:53.845437 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:17:53.846309 kubelet[2893]: E0120 03:17:53.845568 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:17:53.846309 kubelet[2893]: E0120 03:17:53.845809 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 03:17:53.847069 kubelet[2893]: E0120 03:17:53.846931 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:17:55.263364 systemd[1]: Started sshd@16-10.230.49.118:22-20.161.92.111:54652.service - OpenSSH per-connection server daemon (20.161.92.111:54652). Jan 20 03:17:55.847322 sshd[5102]: Accepted publickey for core from 20.161.92.111 port 54652 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:17:55.849400 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:17:55.857448 systemd-logind[1564]: New session 14 of user core. Jan 20 03:17:55.862856 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 03:17:56.343645 sshd[5105]: Connection closed by 20.161.92.111 port 54652 Jan 20 03:17:56.344502 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Jan 20 03:17:56.349923 systemd[1]: sshd@16-10.230.49.118:22-20.161.92.111:54652.service: Deactivated successfully. 
Jan 20 03:17:56.353033 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 03:17:56.354504 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Jan 20 03:17:56.356733 systemd-logind[1564]: Removed session 14. Jan 20 03:18:00.212938 containerd[1584]: time="2026-01-20T03:18:00.212854620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 03:18:00.563227 containerd[1584]: time="2026-01-20T03:18:00.563016730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:00.564262 containerd[1584]: time="2026-01-20T03:18:00.564192847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 03:18:00.564497 containerd[1584]: time="2026-01-20T03:18:00.564258326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 03:18:00.564850 kubelet[2893]: E0120 03:18:00.564793 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:18:00.565293 kubelet[2893]: E0120 03:18:00.564864 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:18:00.565293 kubelet[2893]: E0120 03:18:00.565064 
2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-226x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:00.566973 kubelet[2893]: E0120 03:18:00.566896 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:18:01.451895 systemd[1]: Started sshd@17-10.230.49.118:22-20.161.92.111:54658.service - OpenSSH per-connection server daemon (20.161.92.111:54658). 
Jan 20 03:18:02.045660 sshd[5119]: Accepted publickey for core from 20.161.92.111 port 54658 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:02.047164 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:02.055558 systemd-logind[1564]: New session 15 of user core. Jan 20 03:18:02.067800 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 03:18:02.538634 sshd[5122]: Connection closed by 20.161.92.111 port 54658 Jan 20 03:18:02.539479 sshd-session[5119]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:02.545973 systemd[1]: sshd@17-10.230.49.118:22-20.161.92.111:54658.service: Deactivated successfully. Jan 20 03:18:02.549571 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 03:18:02.552589 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Jan 20 03:18:02.554426 systemd-logind[1564]: Removed session 15. Jan 20 03:18:02.640734 systemd[1]: Started sshd@18-10.230.49.118:22-20.161.92.111:38280.service - OpenSSH per-connection server daemon (20.161.92.111:38280). 
Jan 20 03:18:03.214193 kubelet[2893]: E0120 03:18:03.213299 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:18:03.214193 kubelet[2893]: E0120 03:18:03.213608 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:18:03.235684 sshd[5135]: Accepted publickey for core from 20.161.92.111 port 38280 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:03.240457 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:03.257340 systemd-logind[1564]: New session 16 of user core. Jan 20 03:18:03.260912 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 03:18:03.800505 sshd[5138]: Connection closed by 20.161.92.111 port 38280 Jan 20 03:18:03.802954 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:03.809896 systemd[1]: sshd@18-10.230.49.118:22-20.161.92.111:38280.service: Deactivated successfully. Jan 20 03:18:03.813833 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 03:18:03.815693 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Jan 20 03:18:03.819343 systemd-logind[1564]: Removed session 16. Jan 20 03:18:03.909554 systemd[1]: Started sshd@19-10.230.49.118:22-20.161.92.111:38290.service - OpenSSH per-connection server daemon (20.161.92.111:38290). Jan 20 03:18:04.526941 sshd[5148]: Accepted publickey for core from 20.161.92.111 port 38290 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:04.529048 sshd-session[5148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:04.538081 systemd-logind[1564]: New session 17 of user core. Jan 20 03:18:04.547860 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 03:18:05.036686 sshd[5151]: Connection closed by 20.161.92.111 port 38290 Jan 20 03:18:05.037933 sshd-session[5148]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:05.044119 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. Jan 20 03:18:05.044681 systemd[1]: sshd@19-10.230.49.118:22-20.161.92.111:38290.service: Deactivated successfully. Jan 20 03:18:05.047485 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 03:18:05.050483 systemd-logind[1564]: Removed session 17. 
Jan 20 03:18:05.217777 kubelet[2893]: E0120 03:18:05.216792 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:18:06.212026 kubelet[2893]: E0120 03:18:06.211869 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:18:07.222643 kubelet[2893]: E0120 03:18:07.219153 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:18:10.141403 systemd[1]: Started sshd@20-10.230.49.118:22-20.161.92.111:38298.service - OpenSSH per-connection server daemon (20.161.92.111:38298). Jan 20 03:18:10.765629 sshd[5196]: Accepted publickey for core from 20.161.92.111 port 38298 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:10.766473 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:10.776224 systemd-logind[1564]: New session 18 of user core. Jan 20 03:18:10.784842 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 03:18:11.147191 systemd[1]: Started sshd@21-10.230.49.118:22-164.92.217.44:58140.service - OpenSSH per-connection server daemon (164.92.217.44:58140). Jan 20 03:18:11.331646 sshd[5207]: Invalid user search from 164.92.217.44 port 58140 Jan 20 03:18:11.348424 sshd[5199]: Connection closed by 20.161.92.111 port 38298 Jan 20 03:18:11.350060 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:11.352068 sshd[5207]: Connection closed by invalid user search 164.92.217.44 port 58140 [preauth] Jan 20 03:18:11.356885 systemd[1]: sshd@21-10.230.49.118:22-164.92.217.44:58140.service: Deactivated successfully. 
Jan 20 03:18:11.360493 systemd[1]: sshd@20-10.230.49.118:22-20.161.92.111:38298.service: Deactivated successfully. Jan 20 03:18:11.366276 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 03:18:11.369574 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. Jan 20 03:18:11.372265 systemd-logind[1564]: Removed session 18. Jan 20 03:18:13.215719 kubelet[2893]: E0120 03:18:13.214187 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:18:16.212970 kubelet[2893]: E0120 03:18:16.211902 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:18:16.212970 kubelet[2893]: E0120 03:18:16.211902 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:18:16.450239 systemd[1]: Started sshd@22-10.230.49.118:22-20.161.92.111:32954.service - OpenSSH per-connection server daemon (20.161.92.111:32954). Jan 20 03:18:17.334396 sshd[5218]: Accepted publickey for core from 20.161.92.111 port 32954 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:17.335394 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:17.342853 systemd-logind[1564]: New session 19 of user core. Jan 20 03:18:17.352843 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 03:18:17.826684 sshd[5221]: Connection closed by 20.161.92.111 port 32954 Jan 20 03:18:17.825888 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:17.830914 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit. Jan 20 03:18:17.832815 systemd[1]: sshd@22-10.230.49.118:22-20.161.92.111:32954.service: Deactivated successfully. Jan 20 03:18:17.836421 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 03:18:17.839671 systemd-logind[1564]: Removed session 19. 
Jan 20 03:18:18.213216 kubelet[2893]: E0120 03:18:18.212670 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:18:18.213216 kubelet[2893]: E0120 03:18:18.212850 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:18:21.214684 kubelet[2893]: E0120 03:18:21.214450 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:18:22.929437 systemd[1]: Started sshd@23-10.230.49.118:22-20.161.92.111:46200.service - OpenSSH per-connection server daemon (20.161.92.111:46200). Jan 20 03:18:23.517553 sshd[5237]: Accepted publickey for core from 20.161.92.111 port 46200 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:23.519474 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:23.526985 systemd-logind[1564]: New session 20 of user core. Jan 20 03:18:23.539842 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 03:18:24.009664 sshd[5240]: Connection closed by 20.161.92.111 port 46200 Jan 20 03:18:24.009343 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:24.017151 systemd[1]: sshd@23-10.230.49.118:22-20.161.92.111:46200.service: Deactivated successfully. Jan 20 03:18:24.023209 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 03:18:24.024747 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. Jan 20 03:18:24.026806 systemd-logind[1564]: Removed session 20. 
Jan 20 03:18:27.214694 kubelet[2893]: E0120 03:18:27.214446 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:18:28.211256 kubelet[2893]: E0120 03:18:28.211101 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:18:29.118998 systemd[1]: Started sshd@24-10.230.49.118:22-20.161.92.111:46214.service - OpenSSH per-connection server daemon (20.161.92.111:46214). 
Jan 20 03:18:29.213772 kubelet[2893]: E0120 03:18:29.213716 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:18:29.703020 sshd[5253]: Accepted publickey for core from 20.161.92.111 port 46214 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:29.705413 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:29.713243 systemd-logind[1564]: New session 21 of user core. Jan 20 03:18:29.725802 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 03:18:30.205670 sshd[5256]: Connection closed by 20.161.92.111 port 46214 Jan 20 03:18:30.206478 sshd-session[5253]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:30.213255 systemd[1]: sshd@24-10.230.49.118:22-20.161.92.111:46214.service: Deactivated successfully. 
Jan 20 03:18:30.214379 kubelet[2893]: E0120 03:18:30.214321 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:18:30.220309 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 03:18:30.222667 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. Jan 20 03:18:30.225445 systemd-logind[1564]: Removed session 21. Jan 20 03:18:30.314756 systemd[1]: Started sshd@25-10.230.49.118:22-20.161.92.111:46216.service - OpenSSH per-connection server daemon (20.161.92.111:46216). Jan 20 03:18:30.903991 sshd[5268]: Accepted publickey for core from 20.161.92.111 port 46216 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:30.905804 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:30.913276 systemd-logind[1564]: New session 22 of user core. Jan 20 03:18:30.920769 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 03:18:31.214880 containerd[1584]: time="2026-01-20T03:18:31.214786834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 03:18:31.536384 containerd[1584]: time="2026-01-20T03:18:31.535771750Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:31.537908 containerd[1584]: time="2026-01-20T03:18:31.537625571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 03:18:31.537908 containerd[1584]: time="2026-01-20T03:18:31.537852644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 03:18:31.538298 kubelet[2893]: E0120 03:18:31.538192 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:18:31.539153 kubelet[2893]: E0120 03:18:31.538404 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 03:18:31.541191 kubelet[2893]: E0120 03:18:31.541107 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xr5mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cb458d8fc-vxcdh_calico-system(f813f3ef-562d-4e92-bd19-fa37c63ad294): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:31.542701 kubelet[2893]: E0120 03:18:31.542656 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:18:31.778636 sshd[5277]: Connection closed by 20.161.92.111 port 46216 Jan 20 03:18:31.786483 sshd-session[5268]: pam_unix(sshd:session): session closed for user core Jan 20 
03:18:31.799959 systemd[1]: sshd@25-10.230.49.118:22-20.161.92.111:46216.service: Deactivated successfully. Jan 20 03:18:31.803725 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 03:18:31.807237 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. Jan 20 03:18:31.809173 systemd-logind[1564]: Removed session 22. Jan 20 03:18:31.885174 systemd[1]: Started sshd@26-10.230.49.118:22-20.161.92.111:46220.service - OpenSSH per-connection server daemon (20.161.92.111:46220). Jan 20 03:18:32.211681 kubelet[2893]: E0120 03:18:32.211503 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:18:32.497645 sshd[5288]: Accepted publickey for core from 20.161.92.111 port 46220 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:32.499499 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:32.508581 systemd-logind[1564]: New session 23 of user core. 
Jan 20 03:18:32.517885 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 03:18:33.743620 sshd[5291]: Connection closed by 20.161.92.111 port 46220 Jan 20 03:18:33.743861 sshd-session[5288]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:33.750704 systemd[1]: sshd@26-10.230.49.118:22-20.161.92.111:46220.service: Deactivated successfully. Jan 20 03:18:33.754395 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 03:18:33.757871 systemd-logind[1564]: Session 23 logged out. Waiting for processes to exit. Jan 20 03:18:33.759861 systemd-logind[1564]: Removed session 23. Jan 20 03:18:33.846750 systemd[1]: Started sshd@27-10.230.49.118:22-20.161.92.111:38594.service - OpenSSH per-connection server daemon (20.161.92.111:38594). Jan 20 03:18:34.441134 sshd[5308]: Accepted publickey for core from 20.161.92.111 port 38594 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:34.443754 sshd-session[5308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:34.451567 systemd-logind[1564]: New session 24 of user core. Jan 20 03:18:34.466798 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 03:18:35.137852 sshd[5311]: Connection closed by 20.161.92.111 port 38594 Jan 20 03:18:35.138300 sshd-session[5308]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:35.145505 systemd[1]: sshd@27-10.230.49.118:22-20.161.92.111:38594.service: Deactivated successfully. Jan 20 03:18:35.149334 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 03:18:35.151845 systemd-logind[1564]: Session 24 logged out. Waiting for processes to exit. Jan 20 03:18:35.153840 systemd-logind[1564]: Removed session 24. Jan 20 03:18:35.245603 systemd[1]: Started sshd@28-10.230.49.118:22-20.161.92.111:38600.service - OpenSSH per-connection server daemon (20.161.92.111:38600). 
Jan 20 03:18:35.833638 sshd[5320]: Accepted publickey for core from 20.161.92.111 port 38600 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:35.834988 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:35.842456 systemd-logind[1564]: New session 25 of user core. Jan 20 03:18:35.854808 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 03:18:36.346339 sshd[5323]: Connection closed by 20.161.92.111 port 38600 Jan 20 03:18:36.346911 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:36.353400 systemd[1]: sshd@28-10.230.49.118:22-20.161.92.111:38600.service: Deactivated successfully. Jan 20 03:18:36.356727 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 03:18:36.359380 systemd-logind[1564]: Session 25 logged out. Waiting for processes to exit. Jan 20 03:18:36.362502 systemd-logind[1564]: Removed session 25. Jan 20 03:18:40.212283 containerd[1584]: time="2026-01-20T03:18:40.212187565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 03:18:40.523122 containerd[1584]: time="2026-01-20T03:18:40.523069007Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:40.524758 containerd[1584]: time="2026-01-20T03:18:40.524675536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 03:18:40.524758 containerd[1584]: time="2026-01-20T03:18:40.524725976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 03:18:40.525105 kubelet[2893]: E0120 03:18:40.525050 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:18:40.526788 kubelet[2893]: E0120 03:18:40.525493 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 03:18:40.526788 kubelet[2893]: E0120 03:18:40.526305 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:05f348d2e5bc42b3908323f8f106888c,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfi
le:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:40.534930 containerd[1584]: time="2026-01-20T03:18:40.534876052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 03:18:40.849683 containerd[1584]: time="2026-01-20T03:18:40.849090195Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:40.850429 containerd[1584]: time="2026-01-20T03:18:40.850376542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 03:18:40.850518 containerd[1584]: time="2026-01-20T03:18:40.850497729Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 03:18:40.850909 kubelet[2893]: E0120 03:18:40.850766 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:18:40.851107 
kubelet[2893]: E0120 03:18:40.850953 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 03:18:40.851492 kubelet[2893]: E0120 03:18:40.851182 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24r9f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*
10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-675856cf68-jvm86_calico-system(372633f3-4d42-411f-aa34-da8a913ea6df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:40.852620 kubelet[2893]: E0120 03:18:40.852535 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:18:41.451935 systemd[1]: Started sshd@29-10.230.49.118:22-20.161.92.111:38606.service - OpenSSH per-connection server daemon (20.161.92.111:38606). 
Jan 20 03:18:42.037917 sshd[5371]: Accepted publickey for core from 20.161.92.111 port 38606 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:42.040242 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:42.047690 systemd-logind[1564]: New session 26 of user core. Jan 20 03:18:42.059844 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 03:18:42.212060 containerd[1584]: time="2026-01-20T03:18:42.211999746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:18:42.540355 sshd[5374]: Connection closed by 20.161.92.111 port 38606 Jan 20 03:18:42.540884 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:42.546088 systemd-logind[1564]: Session 26 logged out. Waiting for processes to exit. Jan 20 03:18:42.546242 systemd[1]: sshd@29-10.230.49.118:22-20.161.92.111:38606.service: Deactivated successfully. Jan 20 03:18:42.549228 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 03:18:42.552931 systemd-logind[1564]: Removed session 26. 
Jan 20 03:18:42.735377 containerd[1584]: time="2026-01-20T03:18:42.735274064Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:42.736709 containerd[1584]: time="2026-01-20T03:18:42.736624182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:18:42.736709 containerd[1584]: time="2026-01-20T03:18:42.736671049Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:18:42.737327 kubelet[2893]: E0120 03:18:42.737211 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:18:42.737327 kubelet[2893]: E0120 03:18:42.737282 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:18:42.739080 kubelet[2893]: E0120 03:18:42.738029 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw8dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747897bbb-jcg9v_calico-apiserver(2fb46006-e4ca-4a17-9db5-a5327a1b235a): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:42.739361 kubelet[2893]: E0120 03:18:42.739326 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:18:43.213861 containerd[1584]: time="2026-01-20T03:18:43.213464817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 03:18:43.740708 containerd[1584]: time="2026-01-20T03:18:43.740656599Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:43.742160 containerd[1584]: time="2026-01-20T03:18:43.742041585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 03:18:43.742160 containerd[1584]: time="2026-01-20T03:18:43.742120221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 03:18:43.742421 kubelet[2893]: E0120 03:18:43.742359 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:18:43.744126 kubelet[2893]: E0120 03:18:43.742436 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 03:18:43.744126 kubelet[2893]: E0120 03:18:43.742653 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-226x7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nbjcp_calico-system(ec36c53c-7c05-428f-8474-ef17694fd900): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:43.745017 kubelet[2893]: E0120 03:18:43.744409 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900" Jan 20 03:18:44.056952 systemd[1]: Started sshd@30-10.230.49.118:22-164.92.217.44:50886.service - OpenSSH per-connection server daemon (164.92.217.44:50886). Jan 20 03:18:44.183490 sshd[5398]: Invalid user search from 164.92.217.44 port 50886 Jan 20 03:18:44.201702 sshd[5398]: Connection closed by invalid user search 164.92.217.44 port 50886 [preauth] Jan 20 03:18:44.206816 systemd[1]: sshd@30-10.230.49.118:22-164.92.217.44:50886.service: Deactivated successfully. Jan 20 03:18:45.216357 containerd[1584]: time="2026-01-20T03:18:45.215145627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 03:18:45.525831 containerd[1584]: time="2026-01-20T03:18:45.525661181Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:45.528615 containerd[1584]: time="2026-01-20T03:18:45.527799746Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 03:18:45.528615 containerd[1584]: time="2026-01-20T03:18:45.527898101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 03:18:45.528739 kubelet[2893]: E0120 03:18:45.528203 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:18:45.528739 kubelet[2893]: E0120 03:18:45.528288 2893 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 03:18:45.529227 kubelet[2893]: E0120 03:18:45.529009 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68wqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-747897bbb-28rcr_calico-apiserver(b6dc1880-5e6d-4d78-bdb4-990b30c248de): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:45.530240 kubelet[2893]: E0120 03:18:45.530199 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:18:46.212918 kubelet[2893]: E0120 03:18:46.212851 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:18:46.213904 containerd[1584]: time="2026-01-20T03:18:46.213635546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 03:18:46.519128 containerd[1584]: time="2026-01-20T03:18:46.519054244Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:46.520819 containerd[1584]: time="2026-01-20T03:18:46.520448766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 03:18:46.521982 kubelet[2893]: E0120 03:18:46.520908 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:18:46.521982 kubelet[2893]: E0120 03:18:46.521719 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 03:18:46.523819 kubelet[2893]: E0120 03:18:46.522223 2893 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:46.561648 containerd[1584]: time="2026-01-20T03:18:46.520506027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 03:18:46.561935 containerd[1584]: time="2026-01-20T03:18:46.524949195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 03:18:46.870741 containerd[1584]: time="2026-01-20T03:18:46.869814295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 03:18:46.872317 containerd[1584]: time="2026-01-20T03:18:46.872259474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 03:18:46.872512 containerd[1584]: time="2026-01-20T03:18:46.872434365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 03:18:46.873045 kubelet[2893]: E0120 03:18:46.872993 2893 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:18:46.873506 kubelet[2893]: E0120 03:18:46.873073 2893 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 03:18:46.873506 kubelet[2893]: E0120 03:18:46.873243 2893 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nmsdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProf
ile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vfgpq_calico-system(4233551d-98b7-48f5-b9e1-45373c718e78): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 03:18:46.874754 kubelet[2893]: E0120 03:18:46.874695 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vfgpq" podUID="4233551d-98b7-48f5-b9e1-45373c718e78" Jan 20 03:18:47.650771 systemd[1]: Started sshd@31-10.230.49.118:22-20.161.92.111:60924.service - OpenSSH per-connection server daemon (20.161.92.111:60924). 
Jan 20 03:18:48.297149 sshd[5404]: Accepted publickey for core from 20.161.92.111 port 60924 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:48.299618 sshd-session[5404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:48.310477 systemd-logind[1564]: New session 27 of user core. Jan 20 03:18:48.317007 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 20 03:18:48.849633 sshd[5409]: Connection closed by 20.161.92.111 port 60924 Jan 20 03:18:48.848946 sshd-session[5404]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:48.856933 systemd[1]: sshd@31-10.230.49.118:22-20.161.92.111:60924.service: Deactivated successfully. Jan 20 03:18:48.860449 systemd[1]: session-27.scope: Deactivated successfully. Jan 20 03:18:48.863623 systemd-logind[1564]: Session 27 logged out. Waiting for processes to exit. Jan 20 03:18:48.866170 systemd-logind[1564]: Removed session 27. Jan 20 03:18:53.214046 kubelet[2893]: E0120 03:18:53.213952 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-675856cf68-jvm86" 
podUID="372633f3-4d42-411f-aa34-da8a913ea6df" Jan 20 03:18:53.961665 systemd[1]: Started sshd@32-10.230.49.118:22-20.161.92.111:33632.service - OpenSSH per-connection server daemon (20.161.92.111:33632). Jan 20 03:18:54.555640 sshd[5423]: Accepted publickey for core from 20.161.92.111 port 33632 ssh2: RSA SHA256:lPPEIkw/VsOjcI9vSZ/WjrhQt89owPMo1rYgBF+MQt0 Jan 20 03:18:54.557157 sshd-session[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 03:18:54.564252 systemd-logind[1564]: New session 28 of user core. Jan 20 03:18:54.572810 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 20 03:18:55.093337 sshd[5426]: Connection closed by 20.161.92.111 port 33632 Jan 20 03:18:55.096231 sshd-session[5423]: pam_unix(sshd:session): session closed for user core Jan 20 03:18:55.104855 systemd[1]: sshd@32-10.230.49.118:22-20.161.92.111:33632.service: Deactivated successfully. Jan 20 03:18:55.110912 systemd[1]: session-28.scope: Deactivated successfully. Jan 20 03:18:55.112956 systemd-logind[1564]: Session 28 logged out. Waiting for processes to exit. Jan 20 03:18:55.117809 systemd-logind[1564]: Removed session 28. 
Jan 20 03:18:56.213005 kubelet[2893]: E0120 03:18:56.212825 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-28rcr" podUID="b6dc1880-5e6d-4d78-bdb4-990b30c248de" Jan 20 03:18:57.215611 kubelet[2893]: E0120 03:18:57.214810 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-cb458d8fc-vxcdh" podUID="f813f3ef-562d-4e92-bd19-fa37c63ad294" Jan 20 03:18:57.215611 kubelet[2893]: E0120 03:18:57.215368 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-747897bbb-jcg9v" podUID="2fb46006-e4ca-4a17-9db5-a5327a1b235a" Jan 20 03:18:58.211425 kubelet[2893]: E0120 
03:18:58.211352 2893 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nbjcp" podUID="ec36c53c-7c05-428f-8474-ef17694fd900"