Jan 20 01:41:26.046064 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 22:42:14 -00 2026
Jan 20 01:41:26.046105 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 01:41:26.046120 kernel: BIOS-provided physical RAM map:
Jan 20 01:41:26.046135 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 01:41:26.046145 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 01:41:26.046157 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 01:41:26.046169 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 20 01:41:26.046180 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 20 01:41:26.046190 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 20 01:41:26.046201 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 20 01:41:26.046217 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 01:41:26.046240 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 01:41:26.046255 kernel: NX (Execute Disable) protection: active
Jan 20 01:41:26.046266 kernel: APIC: Static calls initialized
Jan 20 01:41:26.046279 kernel: SMBIOS 2.8 present.
Jan 20 01:41:26.046303 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 20 01:41:26.046315 kernel: Hypervisor detected: KVM
Jan 20 01:41:26.046331 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 01:41:26.046343 kernel: kvm-clock: using sched offset of 4425759900 cycles
Jan 20 01:41:26.046355 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 01:41:26.046367 kernel: tsc: Detected 2500.032 MHz processor
Jan 20 01:41:26.046379 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 01:41:26.046391 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 01:41:26.046403 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 20 01:41:26.046414 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 01:41:26.046426 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 20 01:41:26.046449 kernel: Using GB pages for direct mapping
Jan 20 01:41:26.046461 kernel: ACPI: Early table checksum verification disabled
Jan 20 01:41:26.046473 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 01:41:26.046485 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:41:26.046497 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:41:26.046511 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:41:26.046523 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 20 01:41:26.046535 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:41:26.046547 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:41:26.046563 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:41:26.046575 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 20 01:41:26.046596 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 20 01:41:26.046607 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 20 01:41:26.046632 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 20 01:41:26.046652 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 20 01:41:26.046665 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 20 01:41:26.046687 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 20 01:41:26.046700 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 20 01:41:26.046712 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 20 01:41:26.046735 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 20 01:41:26.046747 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 20 01:41:26.046759 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 20 01:41:26.046771 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 20 01:41:26.046792 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 20 01:41:26.046804 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 20 01:41:26.046816 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 20 01:41:26.046828 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 20 01:41:26.046840 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 20 01:41:26.046852 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 20 01:41:26.046864 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 20 01:41:26.046876 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 20 01:41:26.046888 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 20 01:41:26.046900 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 20 01:41:26.046917 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 20 01:41:26.046930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 20 01:41:26.046942 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 20 01:41:26.046954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 20 01:41:26.046966 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 20 01:41:26.046979 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 20 01:41:26.046991 kernel: Zone ranges:
Jan 20 01:41:26.047003 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 01:41:26.047015 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 20 01:41:26.047034 kernel: Normal empty
Jan 20 01:41:26.047047 kernel: Movable zone start for each node
Jan 20 01:41:26.047059 kernel: Early memory node ranges
Jan 20 01:41:26.047071 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 01:41:26.047087 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 20 01:41:26.047099 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 20 01:41:26.047582 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 01:41:26.047614 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 01:41:26.047627 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 20 01:41:26.047656 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 01:41:26.047685 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 01:41:26.047698 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 01:41:26.047710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 01:41:26.047737 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 01:41:26.047751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 01:41:26.047763 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 01:41:26.047775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 01:41:26.047787 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 01:41:26.047799 kernel: TSC deadline timer available
Jan 20 01:41:26.047817 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 20 01:41:26.047830 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 01:41:26.047842 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 20 01:41:26.047855 kernel: Booting paravirtualized kernel on KVM
Jan 20 01:41:26.047867 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 01:41:26.047880 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 20 01:41:26.047892 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 20 01:41:26.047905 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 20 01:41:26.047917 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 20 01:41:26.047934 kernel: kvm-guest: PV spinlocks enabled
Jan 20 01:41:26.047947 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 20 01:41:26.049621 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 01:41:26.049644 kernel: random: crng init done
Jan 20 01:41:26.049657 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 20 01:41:26.049670 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 01:41:26.049683 kernel: Fallback order for Node 0: 0
Jan 20 01:41:26.049695 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 20 01:41:26.049715 kernel: Policy zone: DMA32
Jan 20 01:41:26.049738 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 01:41:26.049752 kernel: software IO TLB: area num 16.
Jan 20 01:41:26.049764 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42880K init, 2316K bss, 194764K reserved, 0K cma-reserved)
Jan 20 01:41:26.049777 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 20 01:41:26.049789 kernel: Kernel/User page tables isolation: enabled
Jan 20 01:41:26.049801 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 20 01:41:26.049813 kernel: ftrace: allocated 149 pages with 4 groups
Jan 20 01:41:26.049826 kernel: Dynamic Preempt: voluntary
Jan 20 01:41:26.049844 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 01:41:26.049857 kernel: rcu: RCU event tracing is enabled.
Jan 20 01:41:26.049869 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 20 01:41:26.049882 kernel: Trampoline variant of Tasks RCU enabled.
Jan 20 01:41:26.049894 kernel: Rude variant of Tasks RCU enabled.
Jan 20 01:41:26.049919 kernel: Tracing variant of Tasks RCU enabled.
Jan 20 01:41:26.049936 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 01:41:26.049949 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 20 01:41:26.049962 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 20 01:41:26.049975 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 01:41:26.049988 kernel: Console: colour VGA+ 80x25
Jan 20 01:41:26.050001 kernel: printk: console [tty0] enabled
Jan 20 01:41:26.050019 kernel: printk: console [ttyS0] enabled
Jan 20 01:41:26.050032 kernel: ACPI: Core revision 20230628
Jan 20 01:41:26.050045 kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 01:41:26.050058 kernel: x2apic enabled
Jan 20 01:41:26.050070 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 01:41:26.050088 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Jan 20 01:41:26.050101 kernel: Calibrating delay loop (skipped) preset value.. 5000.06 BogoMIPS (lpj=2500032)
Jan 20 01:41:26.050114 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 01:41:26.050127 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 20 01:41:26.050140 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 20 01:41:26.050153 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 01:41:26.050170 kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 01:41:26.050183 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 01:41:26.050196 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 20 01:41:26.050214 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 01:41:26.050227 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 01:41:26.050240 kernel: MDS: Mitigation: Clear CPU buffers
Jan 20 01:41:26.050253 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 20 01:41:26.050265 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 20 01:41:26.050278 kernel: active return thunk: its_return_thunk
Jan 20 01:41:26.050296 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 20 01:41:26.050309 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 01:41:26.050322 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 01:41:26.050334 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 01:41:26.050347 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 20 01:41:26.050368 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 20 01:41:26.050382 kernel: Freeing SMP alternatives memory: 32K
Jan 20 01:41:26.050395 kernel: pid_max: default: 32768 minimum: 301
Jan 20 01:41:26.050407 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 20 01:41:26.050420 kernel: landlock: Up and running.
Jan 20 01:41:26.050433 kernel: SELinux: Initializing.
Jan 20 01:41:26.050458 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 20 01:41:26.050470 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 20 01:41:26.050483 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 20 01:41:26.050495 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 01:41:26.050508 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 01:41:26.050525 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 20 01:41:26.050538 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 20 01:41:26.050550 kernel: signal: max sigframe size: 1776
Jan 20 01:41:26.050575 kernel: rcu: Hierarchical SRCU implementation.
Jan 20 01:41:26.050588 kernel: rcu: Max phase no-delay instances is 400.
Jan 20 01:41:26.050601 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 20 01:41:26.050614 kernel: smp: Bringing up secondary CPUs ...
Jan 20 01:41:26.050652 kernel: smpboot: x86: Booting SMP configuration:
Jan 20 01:41:26.050665 kernel: .... node #0, CPUs: #1
Jan 20 01:41:26.050684 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 20 01:41:26.050698 kernel: smp: Brought up 1 node, 2 CPUs
Jan 20 01:41:26.050711 kernel: smpboot: Max logical packages: 16
Jan 20 01:41:26.050737 kernel: smpboot: Total of 2 processors activated (10000.12 BogoMIPS)
Jan 20 01:41:26.050750 kernel: devtmpfs: initialized
Jan 20 01:41:26.050763 kernel: x86/mm: Memory block size: 128MB
Jan 20 01:41:26.050776 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 01:41:26.050789 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 20 01:41:26.050802 kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 01:41:26.050820 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 01:41:26.050834 kernel: audit: initializing netlink subsys (disabled)
Jan 20 01:41:26.050847 kernel: audit: type=2000 audit(1768873284.764:1): state=initialized audit_enabled=0 res=1
Jan 20 01:41:26.050859 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 01:41:26.050872 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 01:41:26.050885 kernel: cpuidle: using governor menu
Jan 20 01:41:26.050898 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 01:41:26.050910 kernel: dca service started, version 1.12.1
Jan 20 01:41:26.050923 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 20 01:41:26.050941 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 20 01:41:26.050954 kernel: PCI: Using configuration type 1 for base access
Jan 20 01:41:26.050967 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 01:41:26.050980 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 01:41:26.050994 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 01:41:26.051006 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 01:41:26.051019 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 01:41:26.051032 kernel: ACPI: Added _OSI(Module Device)
Jan 20 01:41:26.051045 kernel: ACPI: Added _OSI(Processor Device)
Jan 20 01:41:26.051063 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 01:41:26.051076 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 01:41:26.051089 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 20 01:41:26.051102 kernel: ACPI: Interpreter enabled
Jan 20 01:41:26.051115 kernel: ACPI: PM: (supports S0 S5)
Jan 20 01:41:26.051128 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 01:41:26.051141 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 01:41:26.051154 kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 01:41:26.051166 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 20 01:41:26.051184 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 01:41:26.051506 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 20 01:41:26.052841 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 20 01:41:26.053021 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 20 01:41:26.053042 kernel: PCI host bridge to bus 0000:00
Jan 20 01:41:26.053232 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 20 01:41:26.053390 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 20 01:41:26.053564 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 01:41:26.053753 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 20 01:41:26.053905 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 01:41:26.054057 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 20 01:41:26.054208 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 01:41:26.054420 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 20 01:41:26.056810 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 20 01:41:26.057002 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 20 01:41:26.057186 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 20 01:41:26.057356 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 20 01:41:26.057538 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 01:41:26.057823 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.058001 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 20 01:41:26.058223 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.058408 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 20 01:41:26.059300 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.059504 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 20 01:41:26.059786 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.059963 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 20 01:41:26.060178 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.060361 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 20 01:41:26.060552 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.060829 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 20 01:41:26.061017 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.061186 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 20 01:41:26.061371 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 20 01:41:26.061540 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 20 01:41:26.061775 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 20 01:41:26.061973 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 20 01:41:26.062149 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 20 01:41:26.062339 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 20 01:41:26.062512 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 20 01:41:26.064806 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 20 01:41:26.064991 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 20 01:41:26.065164 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 20 01:41:26.065336 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 20 01:41:26.065513 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 20 01:41:26.065717 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 20 01:41:26.065928 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 20 01:41:26.066108 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 20 01:41:26.066273 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 20 01:41:26.066463 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 20 01:41:26.068681 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 20 01:41:26.068895 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 20 01:41:26.069090 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 20 01:41:26.069289 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 01:41:26.069462 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 20 01:41:26.069647 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 01:41:26.069861 kernel: pci_bus 0000:02: extended config space not accessible
Jan 20 01:41:26.070054 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 20 01:41:26.070248 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 20 01:41:26.070448 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 20 01:41:26.073766 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 20 01:41:26.073987 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 20 01:41:26.074186 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 20 01:41:26.074374 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 01:41:26.074548 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 20 01:41:26.075777 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 01:41:26.075987 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 20 01:41:26.076171 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 20 01:41:26.076367 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 01:41:26.076544 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 20 01:41:26.079804 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 01:41:26.079992 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 01:41:26.080174 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 20 01:41:26.080364 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 01:41:26.080567 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 01:41:26.081804 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 20 01:41:26.081981 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 01:41:26.082159 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 01:41:26.082338 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 20 01:41:26.082508 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 01:41:26.084988 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 01:41:26.085176 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 20 01:41:26.085368 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 01:41:26.085546 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 01:41:26.085765 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 20 01:41:26.085940 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 01:41:26.085961 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 01:41:26.085975 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 01:41:26.085989 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 01:41:26.086002 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 01:41:26.086023 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 20 01:41:26.086037 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 20 01:41:26.086059 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 20 01:41:26.086072 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 20 01:41:26.086086 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 20 01:41:26.086099 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 20 01:41:26.086112 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 20 01:41:26.086125 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 20 01:41:26.086138 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 20 01:41:26.086156 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 20 01:41:26.086177 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 20 01:41:26.086190 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 20 01:41:26.086204 kernel: iommu: Default domain type: Translated
Jan 20 01:41:26.086217 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 01:41:26.086239 kernel: PCI: Using ACPI for IRQ routing
Jan 20 01:41:26.086252 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 01:41:26.086265 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 01:41:26.086278 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 20 01:41:26.086467 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 20 01:41:26.086667 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 20 01:41:26.086854 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 01:41:26.086874 kernel: vgaarb: loaded
Jan 20 01:41:26.086888 kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 01:41:26.086902 kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 01:41:26.086915 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 01:41:26.086928 kernel: pnp: PnP ACPI init
Jan 20 01:41:26.087134 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 20 01:41:26.087170 kernel: pnp: PnP ACPI: found 5 devices
Jan 20 01:41:26.087183 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 01:41:26.087197 kernel: NET: Registered PF_INET protocol family
Jan 20 01:41:26.087210 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 20 01:41:26.087223 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 20 01:41:26.087236 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 01:41:26.087249 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 01:41:26.087263 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 20 01:41:26.087281 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 20 01:41:26.087295 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 20 01:41:26.087308 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 20 01:41:26.087321 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 01:41:26.087334 kernel: NET: Registered PF_XDP protocol family
Jan 20 01:41:26.087509 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 20 01:41:26.087734 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 20 01:41:26.087915 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 20 01:41:26.088109 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 20 01:41:26.088302 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 20 01:41:26.088481 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 20 01:41:26.088727 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 20 01:41:26.088911 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 20 01:41:26.089100 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 20 01:41:26.089288 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 20 01:41:26.089482 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 20 01:41:26.089717 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 20 01:41:26.089905 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 20 01:41:26.090073 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 20 01:41:26.090240 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 20 01:41:26.090407 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 20 01:41:26.090625 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 20 01:41:26.090849 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 20 01:41:26.091019 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 20 01:41:26.091231 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 20 01:41:26.091435 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 20 01:41:26.091671 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 01:41:26.091861 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 20 01:41:26.092037 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 20 01:41:26.092215 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 20 01:41:26.092396 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 01:41:26.092564 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 20 01:41:26.092785 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 20 01:41:26.092957 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 20 01:41:26.093148 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 01:41:26.093334 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 20 01:41:26.093526 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 20 01:41:26.093754 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 20 01:41:26.093924 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 01:41:26.094102 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 20 01:41:26.094270 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 20 01:41:26.094444 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 20 01:41:26.094641 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 01:41:26.094829 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 20 01:41:26.094999 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 20 01:41:26.095175 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 20 01:41:26.095344 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 01:41:26.095512 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 20 01:41:26.095796 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 20 01:41:26.095967 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 20 01:41:26.096145 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 01:41:26.096315 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 20 01:41:26.096483 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 20 01:41:26.096679 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 20 01:41:26.096881 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 01:41:26.097044 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 20 01:41:26.097203 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 20 01:41:26.097358 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 01:41:26.097511 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 20 01:41:26.097715 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 20 01:41:26.097895 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 20 01:41:26.098084 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 20 01:41:26.098269 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 20 01:41:26.098454 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 20 01:41:26.098737 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 20 01:41:26.098934 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 20 01:41:26.099095 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 20 01:41:26.099263 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 20 01:41:26.099443 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 20 01:41:26.099648 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 20 01:41:26.099830 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 20 01:41:26.100005 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 20 01:41:26.100188 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 20 01:41:26.100357 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 20 01:41:26.100580 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 20 01:41:26.100821 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 20 01:41:26.100982 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 20 01:41:26.101149 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 20 01:41:26.101314 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 20 01:41:26.101482 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 20 01:41:26.101688 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 20 01:41:26.101864 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 20 01:41:26.102025 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 20 01:41:26.102210 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 20 01:41:26.102378 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 20 01:41:26.102537 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 20 01:41:26.102566 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 20 01:41:26.102581 kernel: PCI: CLS 0 bytes, default 64
Jan 20 01:41:26.104647 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 20 01:41:26.104667 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 20 01:41:26.104681 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 20 01:41:26.104695 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240957bf147, max_idle_ns: 440795216753 ns
Jan 20 01:41:26.104714 kernel: Initialise system trusted keyrings
Jan 20 01:41:26.104740 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 20 01:41:26.104762 kernel: Key type asymmetric registered
Jan 20 01:41:26.104776 kernel: Asymmetric key parser 'x509' registered
Jan 20 01:41:26.104789 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 20 01:41:26.104803 kernel: io scheduler mq-deadline registered
Jan 20 01:41:26.104817 kernel: io scheduler kyber registered
Jan 20 01:41:26.104831 kernel: io scheduler bfq registered
Jan 20 01:41:26.105013 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 20 01:41:26.105190 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 20 01:41:26.105362 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.105545 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 20 01:41:26.105766 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 20 01:41:26.105937 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.106119 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 20 01:41:26.106290 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 20 01:41:26.106461 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.108689 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 20 01:41:26.108877 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 20 01:41:26.109047 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.109220 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 20 01:41:26.109396 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 20 01:41:26.109564 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.109780 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 20 01:41:26.109951 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 20 01:41:26.110120 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.110290 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 20 01:41:26.110458 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 20 01:41:26.112667 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.112876 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 20 01:41:26.113076 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 20 01:41:26.113259 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 20 01:41:26.113281 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 20 01:41:26.113296 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 20 01:41:26.113319 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 20 01:41:26.113341 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 01:41:26.113356 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 01:41:26.113370 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 01:41:26.113384 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 01:41:26.113397 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 01:41:26.113411 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 20 01:41:26.113640 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 20 01:41:26.113848 kernel: rtc_cmos 00:03: registered as rtc0
Jan 20 01:41:26.114156 kernel: rtc_cmos 00:03: setting system clock to 2026-01-20T01:41:25 UTC (1768873285)
Jan 20 01:41:26.114402 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 20 01:41:26.114422 kernel: intel_pstate: CPU model not supported
Jan 20 01:41:26.114437 kernel: NET: Registered PF_INET6 protocol family
Jan 20 01:41:26.114450 kernel: Segment Routing with IPv6
Jan 20 01:41:26.114464 kernel: In-situ OAM (IOAM) with IPv6
Jan 20 01:41:26.114478 kernel: NET: Registered PF_PACKET protocol family
Jan 20 01:41:26.114492 kernel: Key type dns_resolver registered
Jan 20 01:41:26.114506 kernel: IPI shorthand broadcast: enabled
Jan 20 01:41:26.114527 kernel: sched_clock: Marking stable (1278003834, 229536785)->(1629458487, -121917868)
Jan 20 01:41:26.114541 kernel: registered taskstats version 1
Jan 20 01:41:26.114555 kernel: Loading compiled-in X.509 certificates
Jan 20 01:41:26.114568 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: ea2d429b6f340e470c7de035feb011ab349763d1'
Jan 20 01:41:26.114582 kernel: Key type .fscrypt registered
Jan 20 01:41:26.114596 kernel: Key type fscrypt-provisioning registered
Jan 20 01:41:26.115669 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 01:41:26.115696 kernel: ima: Allocated hash algorithm: sha1
Jan 20 01:41:26.115709 kernel: ima: No architecture policies found
Jan 20 01:41:26.115749 kernel: clk: Disabling unused clocks
Jan 20 01:41:26.115765 kernel: Freeing unused kernel image (initmem) memory: 42880K
Jan 20 01:41:26.115779 kernel: Write protecting the kernel read-only data: 36864k
Jan 20 01:41:26.115793 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 20 01:41:26.115807 kernel: Run /init as init process
Jan 20 01:41:26.115820 kernel: with arguments:
Jan 20 01:41:26.115834 kernel: /init
Jan 20 01:41:26.115848 kernel: with environment:
Jan 20 01:41:26.115861 kernel: HOME=/
Jan 20 01:41:26.115874 kernel: TERM=linux
Jan 20 01:41:26.115897 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 20 01:41:26.115915 systemd[1]: Detected virtualization kvm.
Jan 20 01:41:26.115934 systemd[1]: Detected architecture x86-64.
Jan 20 01:41:26.115949 systemd[1]: Running in initrd.
Jan 20 01:41:26.115963 systemd[1]: No hostname configured, using default hostname.
Jan 20 01:41:26.115977 systemd[1]: Hostname set to <localhost>.
Jan 20 01:41:26.115992 systemd[1]: Initializing machine ID from VM UUID.
Jan 20 01:41:26.116012 systemd[1]: Queued start job for default target initrd.target.
Jan 20 01:41:26.116027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 20 01:41:26.116051 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 20 01:41:26.116066 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 20 01:41:26.116081 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 20 01:41:26.116096 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 20 01:41:26.116111 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 20 01:41:26.116133 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 20 01:41:26.116148 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 20 01:41:26.116172 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 20 01:41:26.116187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 20 01:41:26.116201 systemd[1]: Reached target paths.target - Path Units.
Jan 20 01:41:26.116216 systemd[1]: Reached target slices.target - Slice Units.
Jan 20 01:41:26.116231 systemd[1]: Reached target swap.target - Swaps.
Jan 20 01:41:26.116246 systemd[1]: Reached target timers.target - Timer Units.
Jan 20 01:41:26.116268 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 20 01:41:26.116295 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 20 01:41:26.116315 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 20 01:41:26.116330 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 20 01:41:26.116344 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 20 01:41:26.116358 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 20 01:41:26.116372 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 20 01:41:26.116386 systemd[1]: Reached target sockets.target - Socket Units.
Jan 20 01:41:26.116405 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 20 01:41:26.116420 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 20 01:41:26.116447 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 20 01:41:26.116462 systemd[1]: Starting systemd-fsck-usr.service...
Jan 20 01:41:26.116477 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 20 01:41:26.116491 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 20 01:41:26.116506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:41:26.116521 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 20 01:41:26.116613 systemd-journald[203]: Collecting audit messages is disabled.
Jan 20 01:41:26.116653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 20 01:41:26.116669 systemd[1]: Finished systemd-fsck-usr.service.
Jan 20 01:41:26.116690 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 20 01:41:26.116712 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 01:41:26.116738 systemd-journald[203]: Journal started
Jan 20 01:41:26.116765 systemd-journald[203]: Runtime Journal (/run/log/journal/163d71d3f9584ffe9c75876068c47709) is 4.7M, max 38.0M, 33.2M free.
Jan 20 01:41:26.065836 systemd-modules-load[204]: Inserted module 'overlay'
Jan 20 01:41:26.162048 kernel: Bridge firewalling registered
Jan 20 01:41:26.117411 systemd-modules-load[204]: Inserted module 'br_netfilter'
Jan 20 01:41:26.166635 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 20 01:41:26.168038 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 20 01:41:26.171218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:41:26.173393 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 20 01:41:26.181827 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:41:26.200919 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 20 01:41:26.203802 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 20 01:41:26.216926 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 20 01:41:26.219323 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 20 01:41:26.234531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 20 01:41:26.237383 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:41:26.243827 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 20 01:41:26.244903 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 20 01:41:26.251893 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 20 01:41:26.270624 dracut-cmdline[236]: dracut-dracut-053
Jan 20 01:41:26.272934 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8c5dc1cd4dcc734d9dabe08efcaa33dd0d0e79b2d8f11a958a4b004e775e3441
Jan 20 01:41:26.297017 systemd-resolved[239]: Positive Trust Anchors:
Jan 20 01:41:26.297773 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 20 01:41:26.297819 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 20 01:41:26.306398 systemd-resolved[239]: Defaulting to hostname 'linux'.
Jan 20 01:41:26.309501 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 20 01:41:26.311288 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 20 01:41:26.373689 kernel: SCSI subsystem initialized
Jan 20 01:41:26.385650 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 01:41:26.399614 kernel: iscsi: registered transport (tcp)
Jan 20 01:41:26.425995 kernel: iscsi: registered transport (qla4xxx)
Jan 20 01:41:26.426060 kernel: QLogic iSCSI HBA Driver
Jan 20 01:41:26.484556 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 20 01:41:26.490828 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 20 01:41:26.523907 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 01:41:26.524004 kernel: device-mapper: uevent: version 1.0.3
Jan 20 01:41:26.527188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 20 01:41:26.578647 kernel: raid6: sse2x4 gen() 13479 MB/s
Jan 20 01:41:26.593632 kernel: raid6: sse2x2 gen() 8898 MB/s
Jan 20 01:41:26.612452 kernel: raid6: sse2x1 gen() 9657 MB/s
Jan 20 01:41:26.612515 kernel: raid6: using algorithm sse2x4 gen() 13479 MB/s
Jan 20 01:41:26.631500 kernel: raid6: .... xor() 7265 MB/s, rmw enabled
Jan 20 01:41:26.631562 kernel: raid6: using ssse3x2 recovery algorithm
Jan 20 01:41:26.658678 kernel: xor: automatically using best checksumming function avx
Jan 20 01:41:26.857741 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 20 01:41:26.874766 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 20 01:41:26.882848 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 20 01:41:26.909654 systemd-udevd[422]: Using default interface naming scheme 'v255'.
Jan 20 01:41:26.916815 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 20 01:41:26.924415 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 20 01:41:26.945785 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jan 20 01:41:26.985483 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 20 01:41:26.990829 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 20 01:41:27.114590 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 20 01:41:27.123812 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 20 01:41:27.152330 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 20 01:41:27.156837 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 20 01:41:27.157671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 20 01:41:27.159760 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 20 01:41:27.168850 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 20 01:41:27.199555 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 20 01:41:27.244870 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 20 01:41:27.260764 kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 01:41:27.275624 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 20 01:41:27.279834 kernel: libata version 3.00 loaded.
Jan 20 01:41:27.291852 kernel: ahci 0000:00:1f.2: version 3.0
Jan 20 01:41:27.292159 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 20 01:41:27.299739 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 20 01:41:27.299992 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 20 01:41:27.298091 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 20 01:41:27.298279 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 20 01:41:27.299267 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:41:27.300703 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 01:41:27.300873 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:41:27.301616 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:41:27.313919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 20 01:41:27.327236 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 20 01:41:27.327270 kernel: GPT:17805311 != 125829119
Jan 20 01:41:27.327289 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 20 01:41:27.327306 kernel: GPT:17805311 != 125829119
Jan 20 01:41:27.327337 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 20 01:41:27.327356 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 20 01:41:27.330743 kernel: scsi host0: ahci
Jan 20 01:41:27.331004 kernel: ACPI: bus type USB registered
Jan 20 01:41:27.336630 kernel: usbcore: registered new interface driver usbfs
Jan 20 01:41:27.336668 kernel: usbcore: registered new interface driver hub
Jan 20 01:41:27.337655 kernel: usbcore: registered new device driver usb
Jan 20 01:41:27.340613 kernel: scsi host1: ahci
Jan 20 01:41:27.349621 kernel: AVX version of gcm_enc/dec engaged.
Jan 20 01:41:27.360627 kernel: scsi host2: ahci
Jan 20 01:41:27.361626 kernel: scsi host3: ahci
Jan 20 01:41:27.364631 kernel: scsi host4: ahci
Jan 20 01:41:27.366622 kernel: scsi host5: ahci
Jan 20 01:41:27.366882 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Jan 20 01:41:27.366906 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Jan 20 01:41:27.366934 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Jan 20 01:41:27.366953 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Jan 20 01:41:27.366971 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Jan 20 01:41:27.366988 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Jan 20 01:41:27.375577 kernel: BTRFS: device fsid ea39c6ab-04c2-4917-8268-943d4ecb2b5c devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (464)
Jan 20 01:41:27.375688 kernel: AES CTR mode by8 optimization enabled
Jan 20 01:41:27.400919 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 20 01:41:27.484774 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (471)
Jan 20 01:41:27.484024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 20 01:41:27.497451 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 20 01:41:27.504660 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 20 01:41:27.510461 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 20 01:41:27.511330 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 20 01:41:27.518855 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 20 01:41:27.523797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 20 01:41:27.528114 disk-uuid[557]: Primary Header is updated.
disk-uuid[557]: Secondary Entries is updated.
disk-uuid[557]: Secondary Header is updated.
Jan 20 01:41:27.542615 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:41:27.548654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:41:27.551251 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:41:27.678623 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 20 01:41:27.683162 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 20 01:41:27.683208 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 20 01:41:27.684916 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 20 01:41:27.686638 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 20 01:41:27.688627 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 20 01:41:27.711620 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 20 01:41:27.716649 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 20 01:41:27.723904 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 20 01:41:27.727752 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 20 01:41:27.728019 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 20 01:41:27.728245 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 20 01:41:27.732890 kernel: hub 1-0:1.0: USB hub found Jan 20 01:41:27.733170 kernel: hub 1-0:1.0: 4 ports detected Jan 20 01:41:27.747615 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 20 01:41:27.751623 kernel: hub 2-0:1.0: USB hub found Jan 20 01:41:27.755265 kernel: hub 2-0:1.0: 4 ports detected Jan 20 01:41:27.987675 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 20 01:41:28.128687 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 01:41:28.136391 kernel: usbcore: registered new interface driver usbhid Jan 20 01:41:28.136453 kernel: usbhid: USB HID core driver Jan 20 01:41:28.144295 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 20 01:41:28.144341 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 20 01:41:28.557674 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 01:41:28.558635 disk-uuid[558]: The operation has completed successfully. Jan 20 01:41:28.613544 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 01:41:28.613781 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 01:41:28.631855 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 20 01:41:28.644950 sh[587]: Success Jan 20 01:41:28.663634 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 20 01:41:28.721132 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 20 01:41:28.723757 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 20 01:41:28.725435 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
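verity-setup.service builds /dev/mapper/usr from the read-only /usr partition plus a hash tree whose root digest is supplied on the kernel command line; at the leaf level, dm-verity stores one salted SHA-256 digest per data block and recomputes it on every read, failing the I/O on any mismatch. A minimal sketch of that leaf computation, assuming a 4096-byte block size and a salt-first layout (the salt value here is a placeholder, and exact salt placement varies with the verity format version):

    import hashlib

    BLOCK_SIZE = 4096
    salt = bytes.fromhex("00" * 32)  # placeholder; the real salt is fixed at build time

    def leaf_digest(block: bytes) -> bytes:
        """Digest of one data block as stored in the verity hash tree (salted)."""
        assert len(block) == BLOCK_SIZE
        return hashlib.sha256(salt + block).digest()

    # On read, dm-verity recomputes leaf_digest(block) and walks the tree up
    # to the pinned root hash instead of ever returning tampered data.
    data = b"\0" * BLOCK_SIZE
    print(leaf_digest(data).hex())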
Jan 20 01:41:28.751690 kernel: BTRFS info (device dm-0): first mount of filesystem ea39c6ab-04c2-4917-8268-943d4ecb2b5c Jan 20 01:41:28.751755 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:41:28.753815 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 20 01:41:28.756033 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 01:41:28.757702 kernel: BTRFS info (device dm-0): using free space tree Jan 20 01:41:28.769262 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 20 01:41:28.770765 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 01:41:28.775798 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 01:41:28.778146 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 01:41:28.799825 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:41:28.799887 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:41:28.799907 kernel: BTRFS info (device vda6): using free space tree Jan 20 01:41:28.804656 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 01:41:28.820137 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 20 01:41:28.821467 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:41:28.828392 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 01:41:28.836840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 01:41:28.914242 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:41:28.922867 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:41:28.976684 ignition[696]: Ignition 2.19.0 Jan 20 01:41:28.977716 ignition[696]: Stage: fetch-offline Jan 20 01:41:28.977801 ignition[696]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:41:28.977822 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:41:28.979983 systemd-networkd[770]: lo: Link UP Jan 20 01:41:28.977977 ignition[696]: parsed url from cmdline: "" Jan 20 01:41:28.979990 systemd-networkd[770]: lo: Gained carrier Jan 20 01:41:28.977983 ignition[696]: no config URL provided Jan 20 01:41:28.982010 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:41:28.977993 ignition[696]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:41:28.982757 systemd-networkd[770]: Enumeration completed Jan 20 01:41:28.978009 ignition[696]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:41:28.983388 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:41:28.978032 ignition[696]: failed to fetch config: resource requires networking Jan 20 01:41:28.983393 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 01:41:28.978290 ignition[696]: Ignition finished successfully Jan 20 01:41:28.983895 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 20 01:41:28.985563 systemd-networkd[770]: eth0: Link UP Jan 20 01:41:28.985569 systemd-networkd[770]: eth0: Gained carrier Jan 20 01:41:28.985580 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:41:28.986955 systemd[1]: Reached target network.target - Network. Jan 20 01:41:28.996820 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 01:41:29.012724 systemd-networkd[770]: eth0: DHCPv4 address 10.230.30.54/30, gateway 10.230.30.53 acquired from 10.230.30.53 Jan 20 01:41:29.021080 ignition[778]: Ignition 2.19.0 Jan 20 01:41:29.021122 ignition[778]: Stage: fetch Jan 20 01:41:29.021453 ignition[778]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:41:29.021474 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:41:29.021669 ignition[778]: parsed url from cmdline: "" Jan 20 01:41:29.021676 ignition[778]: no config URL provided Jan 20 01:41:29.021706 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 01:41:29.021722 ignition[778]: no config at "/usr/lib/ignition/user.ign" Jan 20 01:41:29.021931 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 20 01:41:29.021981 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 20 01:41:29.022140 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 20 01:41:29.036367 ignition[778]: GET result: OK Jan 20 01:41:29.037017 ignition[778]: parsing config with SHA512: 0d1545795566743bfb393ea6c275f7419b025950ebb0ac7aa356b041f6513f6a38d42039b070e020a7d09aedfb9745cb75172667dd875fcda249b301c92b486d Jan 20 01:41:29.043670 unknown[778]: fetched base config from "system" Jan 20 01:41:29.044262 ignition[778]: fetch: fetch complete Jan 20 01:41:29.043697 unknown[778]: fetched base config from "system" Jan 20 01:41:29.044275 ignition[778]: fetch: fetch passed Jan 20 01:41:29.043708 unknown[778]: fetched user config from "openstack" Jan 20 01:41:29.044343 ignition[778]: Ignition finished successfully Jan 20 01:41:29.046238 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 01:41:29.062883 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 01:41:29.086690 ignition[786]: Ignition 2.19.0 Jan 20 01:41:29.086710 ignition[786]: Stage: kargs Jan 20 01:41:29.086936 ignition[786]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:41:29.086968 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:41:29.088126 ignition[786]: kargs: kargs passed Jan 20 01:41:29.091258 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 01:41:29.088209 ignition[786]: Ignition finished successfully Jan 20 01:41:29.097816 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 01:41:29.118546 ignition[793]: Ignition 2.19.0 Jan 20 01:41:29.118572 ignition[793]: Stage: disks Jan 20 01:41:29.120133 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 20 01:41:29.120157 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:41:29.121352 ignition[793]: disks: disks passed Jan 20 01:41:29.121427 ignition[793]: Ignition finished successfully Jan 20 01:41:29.124730 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 01:41:29.126892 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
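The fetch stage above looks for a config drive first and, failing that, pulls user data from the OpenStack metadata service, logging the SHA-512 of the payload before parsing it. A rough sketch of that same fallback, assuming the label and endpoint shown in the log (error handling and config-drive mounting trimmed):

    import hashlib
    import os
    import urllib.request

    CONFIG_DRIVE = "/dev/disk/by-label/config-2"
    METADATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    if os.path.exists(CONFIG_DRIVE):
        raise SystemExit("config drive present; mount it and read user_data there")

    # No config drive: fall back to the metadata service, as the log shows.
    with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
        user_data = resp.read()

    print("parsing config with SHA512:", hashlib.sha512(user_data).hexdigest())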
Jan 20 01:41:29.127781 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 01:41:29.129471 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:41:29.131114 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:41:29.132543 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:41:29.140815 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 01:41:29.159395 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 20 01:41:29.162761 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 01:41:29.167733 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 01:41:29.300627 kernel: EXT4-fs (vda9): mounted filesystem 3f4cac35-b37d-4410-a45a-1329edafa0f9 r/w with ordered data mode. Quota mode: none. Jan 20 01:41:29.301885 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 01:41:29.303397 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 01:41:29.313767 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:41:29.316550 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 01:41:29.318494 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 01:41:29.324842 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 20 01:41:29.329097 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809) Jan 20 01:41:29.333645 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:41:29.333697 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:41:29.333738 kernel: BTRFS info (device vda6): using free space tree Jan 20 01:41:29.337253 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 01:41:29.341734 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 01:41:29.339338 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:41:29.346406 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 01:41:29.347245 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 01:41:29.357819 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 01:41:29.426171 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 20 01:41:29.434642 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 20 01:41:29.444193 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 20 01:41:29.451454 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 20 01:41:29.555828 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 01:41:29.561735 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 01:41:29.564801 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 01:41:29.578688 kernel: BTRFS info (device vda6): last unmount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:41:29.605983 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
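The fsck summary above, "ROOT: clean, 14/1628000 files, 120691/1617920 blocks", says the freshly provisioned root filesystem is almost empty. Turning that summary into utilisation percentages is simple arithmetic over what e2fsck prints for a clean filesystem (parsing shown only as an illustration):

    summary = "ROOT: clean, 14/1628000 files, 120691/1617920 blocks"

    label, rest = summary.split(": clean, ")
    files, blocks = [part.split()[0] for part in rest.split(", ")]

    for name, frac in (("inodes", files), ("blocks", blocks)):
        used, total = map(int, frac.split("/"))
        print(f"{label} {name}: {used}/{total} used ({used / total:.2%})")
    # ROOT inodes: 14/1628000 used (0.00%)
    # ROOT blocks: 120691/1617920 used (7.46%)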
Jan 20 01:41:29.620121 ignition[926]: INFO : Ignition 2.19.0 Jan 20 01:41:29.622711 ignition[926]: INFO : Stage: mount Jan 20 01:41:29.622711 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:41:29.622711 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:41:29.625202 ignition[926]: INFO : mount: mount passed Jan 20 01:41:29.625202 ignition[926]: INFO : Ignition finished successfully Jan 20 01:41:29.626236 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 01:41:29.749673 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 01:41:30.487854 systemd-networkd[770]: eth0: Gained IPv6LL Jan 20 01:41:31.995271 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:179:878d:24:19ff:fee6:1e36/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:878d:24:19ff:fee6:1e36/64 assigned by NDisc. Jan 20 01:41:31.995289 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 20 01:41:36.497846 coreos-metadata[811]: Jan 20 01:41:36.497 WARN failed to locate config-drive, using the metadata service API instead Jan 20 01:41:36.522772 coreos-metadata[811]: Jan 20 01:41:36.522 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 01:41:36.537376 coreos-metadata[811]: Jan 20 01:41:36.537 INFO Fetch successful Jan 20 01:41:36.538351 coreos-metadata[811]: Jan 20 01:41:36.537 INFO wrote hostname srv-vpmg3.gb1.brightbox.com to /sysroot/etc/hostname Jan 20 01:41:36.540645 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 20 01:41:36.540844 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 20 01:41:36.556210 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 01:41:36.565339 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 01:41:36.591654 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Jan 20 01:41:36.596893 kernel: BTRFS info (device vda6): first mount of filesystem 4d38e730-f67b-44a8-80fa-82ea4ebb2132 Jan 20 01:41:36.596959 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 20 01:41:36.598778 kernel: BTRFS info (device vda6): using free space tree Jan 20 01:41:36.604644 kernel: BTRFS info (device vda6): auto enabling async discard Jan 20 01:41:36.607352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
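The networkd notice a few entries up is an address-family overlap: DHCPv6 handed out 2a02:1348:179:878d:24:19ff:fee6:1e36 as a /128 while router advertisements already cover the same address through the on-link /64, so networkd keeps the NDisc-assigned one. The containment test it is effectively making can be reproduced with the standard ipaddress module:

    import ipaddress

    dhcpv6_addr = ipaddress.ip_address("2a02:1348:179:878d:24:19ff:fee6:1e36")
    ndisc_prefix = ipaddress.ip_network("2a02:1348:179:878d::/64")

    # The /128 lease falls inside the /64 that NDisc already assigned,
    # so the DHCPv6 address is redundant and gets ignored.
    print(dhcpv6_addr in ndisc_prefix)  # True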
Jan 20 01:41:36.637089 ignition[962]: INFO : Ignition 2.19.0 Jan 20 01:41:36.637089 ignition[962]: INFO : Stage: files Jan 20 01:41:36.638907 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:41:36.638907 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:41:36.638907 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Jan 20 01:41:36.641770 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 01:41:36.641770 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 01:41:36.653215 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 01:41:36.654544 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 01:41:36.654544 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 01:41:36.653926 unknown[962]: wrote ssh authorized keys file for user: core Jan 20 01:41:36.659746 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 01:41:36.661096 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 20 01:41:36.898314 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 01:41:37.154320 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 20 01:41:37.160967 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 01:41:37.160967 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 01:41:37.160967 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:41:37.160967 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 01:41:37.160967 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:41:37.160967 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 01:41:37.160967 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:41:37.175569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 01:41:37.175569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:41:37.175569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 01:41:37.175569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:41:37.175569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
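Op(2) above installs SSH keys for the core user inside the still-mounted /sysroot. A simplified illustration of the end result, assuming one key and the conventional authorized_keys location and permissions (Flatcar actually routes keys through its update-ssh-keys tooling, so this shows the effect, not the mechanism; the key string is hypothetical and chown to core's uid is omitted):

    import os

    SYSROOT = "/sysroot"
    KEY = "ssh-ed25519 AAAA... core@example"  # hypothetical key material

    ssh_dir = os.path.join(SYSROOT, "home/core/.ssh")
    os.makedirs(ssh_dir, mode=0o700, exist_ok=True)

    path = os.path.join(ssh_dir, "authorized_keys")
    with open(path, "a") as fh:
        fh.write(KEY + "\n")
    os.chmod(path, 0o600)  # sshd's StrictModes rejects key files writable by others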
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:41:37.175569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:41:37.175569 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 20 01:41:37.551758 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 01:41:38.772087 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 20 01:41:38.772087 ignition[962]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 01:41:38.786900 ignition[962]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:41:38.789378 ignition[962]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 01:41:38.789378 ignition[962]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 01:41:38.791788 ignition[962]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 01:41:38.791788 ignition[962]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 01:41:38.791788 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:41:38.795492 ignition[962]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 01:41:38.795492 ignition[962]: INFO : files: files passed Jan 20 01:41:38.795492 ignition[962]: INFO : Ignition finished successfully Jan 20 01:41:38.797873 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 01:41:38.808995 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 01:41:38.816901 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 01:41:38.829239 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 01:41:38.829493 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 01:41:38.840254 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:41:38.840254 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:41:38.842605 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 01:41:38.844332 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:41:38.845976 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 01:41:38.853885 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 01:41:38.901425 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 01:41:38.901708 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 01:41:38.903732 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 20 01:41:38.905091 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 01:41:38.906839 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 01:41:38.916867 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 01:41:38.938249 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:41:38.945829 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 01:41:38.970392 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:41:38.971448 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:41:38.973201 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 01:41:38.974822 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 01:41:38.975141 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 01:41:38.976849 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 01:41:38.977965 systemd[1]: Stopped target basic.target - Basic System. Jan 20 01:41:38.979524 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 01:41:38.981023 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 01:41:38.982543 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 01:41:38.984268 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 01:41:38.985825 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 01:41:38.987522 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 01:41:38.989058 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 01:41:38.990647 systemd[1]: Stopped target swap.target - Swaps. Jan 20 01:41:38.992029 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 01:41:38.992258 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 01:41:38.994277 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:41:38.996024 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:41:38.997590 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 01:41:38.997799 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:41:38.999092 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 01:41:38.999292 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 01:41:39.003019 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 01:41:39.003201 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 01:41:39.005279 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 01:41:39.005613 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 01:41:39.013880 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 01:41:39.014773 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 01:41:39.015128 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 01:41:39.021660 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 01:41:39.022470 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 20 01:41:39.022743 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:41:39.029834 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 01:41:39.030046 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 01:41:39.038416 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 01:41:39.038636 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 01:41:39.056622 ignition[1015]: INFO : Ignition 2.19.0 Jan 20 01:41:39.056622 ignition[1015]: INFO : Stage: umount Jan 20 01:41:39.056622 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 01:41:39.056622 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 01:41:39.061642 ignition[1015]: INFO : umount: umount passed Jan 20 01:41:39.061642 ignition[1015]: INFO : Ignition finished successfully Jan 20 01:41:39.062006 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 01:41:39.062207 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 01:41:39.064892 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 01:41:39.065003 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 01:41:39.066066 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 01:41:39.066147 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 01:41:39.067764 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 01:41:39.067835 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 01:41:39.069862 systemd[1]: Stopped target network.target - Network. Jan 20 01:41:39.070831 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 01:41:39.070903 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 01:41:39.073024 systemd[1]: Stopped target paths.target - Path Units. Jan 20 01:41:39.074825 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 01:41:39.079784 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:41:39.080685 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 01:41:39.081406 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 01:41:39.083261 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 01:41:39.083397 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 01:41:39.084940 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 01:41:39.085007 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 01:41:39.086341 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 01:41:39.086439 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 01:41:39.087766 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 01:41:39.087851 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 01:41:39.089508 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 01:41:39.091734 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 01:41:39.092941 systemd-networkd[770]: eth0: DHCPv6 lease lost Jan 20 01:41:39.095981 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 01:41:39.098291 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 20 01:41:39.098483 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 01:41:39.100992 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 01:41:39.101139 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:41:39.109718 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 01:41:39.110481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 01:41:39.110556 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 01:41:39.112216 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:41:39.120459 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 01:41:39.120661 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 01:41:39.127336 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 01:41:39.128477 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:41:39.141345 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 01:41:39.141471 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 01:41:39.143387 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 01:41:39.143453 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:41:39.145042 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 01:41:39.145122 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 01:41:39.147396 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 01:41:39.147467 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 01:41:39.149097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 01:41:39.149181 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 01:41:39.160891 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 01:41:39.161735 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 01:41:39.161825 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:41:39.165149 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 01:41:39.165220 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 01:41:39.166361 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 01:41:39.166429 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:41:39.167250 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 01:41:39.167329 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:41:39.170715 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 01:41:39.170792 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:41:39.172483 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 01:41:39.172687 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 01:41:39.173910 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 01:41:39.174055 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 01:41:39.198739 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 20 01:41:39.198945 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 01:41:39.200753 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 01:41:39.201953 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 01:41:39.202041 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 01:41:39.216877 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 01:41:39.228249 systemd[1]: Switching root. Jan 20 01:41:39.266220 systemd-journald[203]: Journal stopped Jan 20 01:41:40.686207 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 20 01:41:40.686334 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 01:41:40.686361 kernel: SELinux: policy capability open_perms=1 Jan 20 01:41:40.686388 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 01:41:40.686407 kernel: SELinux: policy capability always_check_network=0 Jan 20 01:41:40.686427 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 01:41:40.686446 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 01:41:40.686465 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 01:41:40.686483 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 01:41:40.686508 kernel: audit: type=1403 audit(1768873299.521:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 20 01:41:40.686566 systemd[1]: Successfully loaded SELinux policy in 52.748ms. Jan 20 01:41:40.686631 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.800ms. Jan 20 01:41:40.686656 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 20 01:41:40.686678 systemd[1]: Detected virtualization kvm. Jan 20 01:41:40.686698 systemd[1]: Detected architecture x86-64. Jan 20 01:41:40.686718 systemd[1]: Detected first boot. Jan 20 01:41:40.686753 systemd[1]: Hostname set to . Jan 20 01:41:40.686776 systemd[1]: Initializing machine ID from VM UUID. Jan 20 01:41:40.686808 zram_generator::config[1058]: No configuration found. Jan 20 01:41:40.686831 systemd[1]: Populated /etc with preset unit settings. Jan 20 01:41:40.686859 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 01:41:40.686880 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 01:41:40.686901 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 01:41:40.686922 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 01:41:40.686942 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 01:41:40.686962 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 01:41:40.686994 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 01:41:40.687023 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 01:41:40.687045 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 01:41:40.687066 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 01:41:40.687086 systemd[1]: Created slice user.slice - User and Session Slice. 
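The long +PAM +AUDIT ... string in the "systemd 255 running" banner above is the compile-time feature list: a leading + means the feature was built in, - means it was compiled out. Splitting it into the two sets is straightforward (string shortened here for space):

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL -FIDO2 +TPM2 -BPF_FRAMEWORK")

    enabled = {f[1:] for f in features.split() if f.startswith("+")}
    disabled = {f[1:] for f in features.split() if f.startswith("-")}

    print("built with   :", ", ".join(sorted(enabled)))
    print("built without:", ", ".join(sorted(disabled)))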
Jan 20 01:41:40.687113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 01:41:40.687147 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 01:41:40.687169 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 01:41:40.687189 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 01:41:40.687223 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 01:41:40.687264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 01:41:40.687288 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 20 01:41:40.687308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 01:41:40.687329 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 01:41:40.687350 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 01:41:40.687384 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 01:41:40.687407 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 01:41:40.687427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 01:41:40.687454 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 01:41:40.687475 systemd[1]: Reached target slices.target - Slice Units. Jan 20 01:41:40.687495 systemd[1]: Reached target swap.target - Swaps. Jan 20 01:41:40.687516 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 01:41:40.687537 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 01:41:40.687558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 01:41:40.688092 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 01:41:40.688609 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 01:41:40.688641 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 01:41:40.688662 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 01:41:40.688691 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 01:41:40.688736 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 01:41:40.688779 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:40.688801 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 01:41:40.688822 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 01:41:40.688842 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 01:41:40.688864 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 01:41:40.688890 systemd[1]: Reached target machines.target - Containers. Jan 20 01:41:40.688910 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 01:41:40.688931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 20 01:41:40.688967 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 01:41:40.688991 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 01:41:40.689011 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:41:40.689038 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:41:40.689059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:41:40.689080 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 01:41:40.689100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:41:40.689120 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 01:41:40.689141 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 01:41:40.689174 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 01:41:40.689196 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 01:41:40.689217 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 01:41:40.689238 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 01:41:40.689271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 01:41:40.689293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 01:41:40.689315 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 01:41:40.689367 systemd-journald[1151]: Collecting audit messages is disabled. Jan 20 01:41:40.689422 systemd-journald[1151]: Journal started Jan 20 01:41:40.689464 systemd-journald[1151]: Runtime Journal (/run/log/journal/163d71d3f9584ffe9c75876068c47709) is 4.7M, max 38.0M, 33.2M free. Jan 20 01:41:40.695434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 01:41:40.695507 kernel: loop: module loaded Jan 20 01:41:40.350583 systemd[1]: Queued start job for default target multi-user.target. Jan 20 01:41:40.369358 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 01:41:40.370062 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 01:41:40.699633 systemd[1]: verity-setup.service: Deactivated successfully. Jan 20 01:41:40.701649 systemd[1]: Stopped verity-setup.service. Jan 20 01:41:40.706647 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:40.715642 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 01:41:40.723671 kernel: fuse: init (API version 7.39) Jan 20 01:41:40.732081 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 01:41:40.732998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 01:41:40.734723 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 01:41:40.736670 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 01:41:40.737710 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 01:41:40.739180 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 01:41:40.747762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 20 01:41:40.749319 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 01:41:40.749533 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 01:41:40.752184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:41:40.753172 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:41:40.757203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:41:40.757463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:41:40.759079 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 01:41:40.759317 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 01:41:40.761182 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:41:40.761659 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:41:40.764051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 01:41:40.765184 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 01:41:40.767403 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 01:41:40.774920 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 01:41:40.775614 kernel: ACPI: bus type drm_connector registered Jan 20 01:41:40.778789 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:41:40.779045 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:41:40.792417 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 01:41:40.801705 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 01:41:40.810694 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 01:41:40.812710 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 01:41:40.812767 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 01:41:40.816126 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 20 01:41:40.825846 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 01:41:40.831169 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 01:41:40.833072 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:41:40.837831 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 01:41:40.849832 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 01:41:40.850732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:41:40.855056 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 01:41:40.856848 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:41:40.862017 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 01:41:40.867824 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Jan 20 01:41:40.881878 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 01:41:40.887311 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 01:41:40.890907 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 01:41:40.893120 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 01:41:40.923626 kernel: loop0: detected capacity change from 0 to 140768 Jan 20 01:41:40.934389 systemd-journald[1151]: Time spent on flushing to /var/log/journal/163d71d3f9584ffe9c75876068c47709 is 154.326ms for 1137 entries. Jan 20 01:41:40.934389 systemd-journald[1151]: System Journal (/var/log/journal/163d71d3f9584ffe9c75876068c47709) is 8.0M, max 584.8M, 576.8M free. Jan 20 01:41:41.131204 systemd-journald[1151]: Received client request to flush runtime journal. Jan 20 01:41:41.131288 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 01:41:41.133686 kernel: loop1: detected capacity change from 0 to 8 Jan 20 01:41:41.133723 kernel: loop2: detected capacity change from 0 to 224512 Jan 20 01:41:40.936716 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 01:41:40.939112 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 01:41:40.954876 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 20 01:41:41.036613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 01:41:41.038892 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 20 01:41:41.061816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 01:41:41.082960 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 01:41:41.093901 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 01:41:41.143170 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 01:41:41.145830 kernel: loop3: detected capacity change from 0 to 142488 Jan 20 01:41:41.156480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 01:41:41.163647 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 20 01:41:41.175209 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 20 01:41:41.179647 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 20 01:41:41.196939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 01:41:41.225746 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 20 01:41:41.251670 kernel: loop4: detected capacity change from 0 to 140768 Jan 20 01:41:41.278664 kernel: loop5: detected capacity change from 0 to 8 Jan 20 01:41:41.289750 kernel: loop6: detected capacity change from 0 to 224512 Jan 20 01:41:41.321986 kernel: loop7: detected capacity change from 0 to 142488 Jan 20 01:41:41.367140 (sd-merge)[1216]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 20 01:41:41.368048 (sd-merge)[1216]: Merged extensions into '/usr'. Jan 20 01:41:41.381831 systemd[1]: Reloading requested from client PID 1191 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 01:41:41.381856 systemd[1]: Reloading... 
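The (sd-merge) entries just below show systemd-sysext discovering four extension images and overlaying them onto /usr; the loopN capacity changes above are those images being attached. A hedged sketch of the discovery half, assuming the standard sysext search directories (the real merge then assembles an overlayfs, which is skipped here):

    import os

    # Directories systemd-sysext scans for *.raw images or symlinks to them.
    SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    found = []
    for root in SEARCH_PATHS:
        if not os.path.isdir(root):
            continue
        for name in sorted(os.listdir(root)):
            if name.endswith(".raw"):
                found.append(os.path.realpath(os.path.join(root, name)))

    # With this machine's setup the list would include kubernetes-v1.32.4-x86-64.raw.
    print("\n".join(found))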
Jan 20 01:41:41.479634 zram_generator::config[1242]: No configuration found. Jan 20 01:41:41.682073 ldconfig[1186]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 01:41:41.793392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:41:41.863284 systemd[1]: Reloading finished in 480 ms. Jan 20 01:41:41.902405 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 01:41:41.903914 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 01:41:41.916908 systemd[1]: Starting ensure-sysext.service... Jan 20 01:41:41.929554 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 01:41:41.949092 systemd[1]: Reloading requested from client PID 1298 ('systemctl') (unit ensure-sysext.service)... Jan 20 01:41:41.949120 systemd[1]: Reloading... Jan 20 01:41:41.967580 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 01:41:41.969186 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 20 01:41:41.970711 systemd-tmpfiles[1299]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 20 01:41:41.971178 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Jan 20 01:41:41.971337 systemd-tmpfiles[1299]: ACLs are not supported, ignoring. Jan 20 01:41:41.976721 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:41:41.976739 systemd-tmpfiles[1299]: Skipping /boot Jan 20 01:41:41.996116 systemd-tmpfiles[1299]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 01:41:41.996137 systemd-tmpfiles[1299]: Skipping /boot Jan 20 01:41:42.043632 zram_generator::config[1326]: No configuration found. Jan 20 01:41:42.232761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:41:42.303606 systemd[1]: Reloading finished in 353 ms. Jan 20 01:41:42.332142 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 01:41:42.343384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 01:41:42.353896 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 01:41:42.358821 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 01:41:42.362831 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 01:41:42.368775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 01:41:42.371473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 01:41:42.381380 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 01:41:42.391957 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:42.392311 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 20 01:41:42.400933 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 01:41:42.408910 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 01:41:42.414930 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 01:41:42.415896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:41:42.416071 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:42.421263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:42.421535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:41:42.421784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:41:42.430977 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 01:41:42.431807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:42.437428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 01:41:42.438592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 01:41:42.442176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:42.442568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 01:41:42.452926 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 01:41:42.455885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 01:41:42.456087 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 20 01:41:42.460664 systemd[1]: Finished ensure-sysext.service. Jan 20 01:41:42.461889 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 01:41:42.488008 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 01:41:42.493938 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 01:41:42.496797 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 01:41:42.499396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 01:41:42.499637 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 01:41:42.502342 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 01:41:42.502600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 01:41:42.506236 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 01:41:42.506441 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 01:41:42.514234 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 20 01:41:42.520354 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 01:41:42.520448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 01:41:42.520485 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 01:41:42.551568 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 01:41:42.556664 systemd-udevd[1390]: Using default interface naming scheme 'v255'. Jan 20 01:41:42.563902 augenrules[1421]: No rules Jan 20 01:41:42.564335 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 01:41:42.573278 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 01:41:42.606749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 01:41:42.618850 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 01:41:42.735131 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 20 01:41:42.751639 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1442) Jan 20 01:41:42.778483 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 01:41:42.779847 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 01:41:42.795272 systemd-resolved[1388]: Positive Trust Anchors: Jan 20 01:41:42.796207 systemd-resolved[1388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 01:41:42.796258 systemd-resolved[1388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 01:41:42.806338 systemd-resolved[1388]: Using system hostname 'srv-vpmg3.gb1.brightbox.com'. Jan 20 01:41:42.813070 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 01:41:42.814300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 01:41:42.836163 systemd-networkd[1434]: lo: Link UP Jan 20 01:41:42.836186 systemd-networkd[1434]: lo: Gained carrier Jan 20 01:41:42.839491 systemd-networkd[1434]: Enumeration completed Jan 20 01:41:42.839777 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 01:41:42.840696 systemd[1]: Reached target network.target - Network. Jan 20 01:41:42.849851 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 01:41:42.902952 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:41:42.903124 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 20 01:41:42.907738 systemd-networkd[1434]: eth0: Link UP Jan 20 01:41:42.908207 systemd-networkd[1434]: eth0: Gained carrier Jan 20 01:41:42.908315 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 20 01:41:42.922750 systemd-networkd[1434]: eth0: DHCPv4 address 10.230.30.54/30, gateway 10.230.30.53 acquired from 10.230.30.53 Jan 20 01:41:42.923768 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 20 01:41:42.924888 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 20 01:41:42.945000 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 01:41:42.953853 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 01:41:42.959897 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 20 01:41:42.978705 kernel: ACPI: button: Power Button [PWRF] Jan 20 01:41:42.990130 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 01:41:42.990690 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 01:41:43.018662 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 20 01:41:43.023621 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 20 01:41:43.025901 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 20 01:41:43.046685 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 20 01:41:43.089818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 01:41:43.298773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 01:41:43.316045 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 20 01:41:43.322868 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 20 01:41:43.351715 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:41:43.382130 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 20 01:41:43.384054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 01:41:43.384929 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 01:41:43.385884 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 01:41:43.386974 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 01:41:43.388244 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 01:41:43.389195 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 01:41:43.390005 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 01:41:43.390833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 01:41:43.390887 systemd[1]: Reached target paths.target - Path Units. Jan 20 01:41:43.391582 systemd[1]: Reached target timers.target - Timer Units. Jan 20 01:41:43.393242 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jan 20 01:41:43.396054 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 01:41:43.403010 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 01:41:43.405744 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 20 01:41:43.407367 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 01:41:43.408263 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 01:41:43.409002 systemd[1]: Reached target basic.target - Basic System. Jan 20 01:41:43.409750 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:41:43.409790 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 01:41:43.416744 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 01:41:43.420391 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 01:41:43.425833 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 01:41:43.428510 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 20 01:41:43.431767 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 01:41:43.442765 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 01:41:43.443559 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 01:41:43.446794 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 01:41:43.450963 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 01:41:43.458778 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 01:41:43.461920 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 01:41:43.467656 jq[1479]: false Jan 20 01:41:43.467075 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 01:41:43.469272 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 01:41:43.469931 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 01:41:43.471026 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 01:41:43.475773 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 01:41:43.483149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 01:41:43.483435 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 20 01:41:43.531224 jq[1489]: true Jan 20 01:41:43.549003 extend-filesystems[1480]: Found loop4 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found loop5 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found loop6 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found loop7 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda1 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda2 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda3 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found usr Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda4 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda6 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda7 Jan 20 01:41:43.549003 extend-filesystems[1480]: Found vda9 Jan 20 01:41:43.549003 extend-filesystems[1480]: Checking size of /dev/vda9 Jan 20 01:41:43.661662 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 20 01:41:43.542324 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 01:41:43.662640 extend-filesystems[1480]: Resized partition /dev/vda9 Jan 20 01:41:43.553833 dbus-daemon[1478]: [system] SELinux support is enabled Jan 20 01:41:43.677108 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1448) Jan 20 01:41:43.542667 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 01:41:43.677455 extend-filesystems[1521]: resize2fs 1.47.1 (20-May-2024) Jan 20 01:41:43.561023 dbus-daemon[1478]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1434 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 20 01:41:43.687416 update_engine[1488]: I20260120 01:41:43.640797 1488 main.cc:92] Flatcar Update Engine starting Jan 20 01:41:43.687416 update_engine[1488]: I20260120 01:41:43.656935 1488 update_check_scheduler.cc:74] Next update check in 3m40s Jan 20 01:41:43.555512 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 01:41:43.565984 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 20 01:41:43.688415 jq[1497]: true Jan 20 01:41:43.564110 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 01:41:43.564175 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 01:41:43.572067 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 01:41:43.572096 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 01:41:43.585502 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 20 01:41:43.596243 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 20 01:41:43.606120 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 20 01:41:43.656779 systemd[1]: Started update-engine.service - Update Engine. Jan 20 01:41:43.681100 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 20 01:41:43.694893 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 01:41:43.701070 tar[1500]: linux-amd64/LICENSE Jan 20 01:41:43.701070 tar[1500]: linux-amd64/helm Jan 20 01:41:43.727282 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 01:41:43.728842 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 01:41:43.845723 bash[1529]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:41:43.852550 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 01:41:43.876940 systemd[1]: Starting sshkeys.service... Jan 20 01:41:43.887987 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 20 01:41:43.895771 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 01:41:43.905277 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1506 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 20 01:41:43.905427 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 01:41:43.913477 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 20 01:41:43.925035 systemd[1]: Starting polkit.service - Authorization Manager... Jan 20 01:41:44.005703 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Jan 20 01:41:44.006106 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 20 01:41:44.010069 systemd-logind[1487]: New seat seat0. Jan 20 01:41:44.024041 systemd[1]: Started systemd-logind.service - User Login Management. Jan 20 01:41:44.035973 polkitd[1540]: Started polkitd version 121 Jan 20 01:41:44.081337 polkitd[1540]: Loading rules from directory /etc/polkit-1/rules.d Jan 20 01:41:44.088424 polkitd[1540]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 20 01:41:44.104540 polkitd[1540]: Finished loading, compiling and executing 2 rules Jan 20 01:41:44.109178 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 20 01:41:44.109618 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 20 01:41:44.109598 systemd[1]: Started polkit.service - Authorization Manager. Jan 20 01:41:44.110074 polkitd[1540]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 20 01:41:44.129154 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 01:41:44.132406 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 01:41:44.132406 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 20 01:41:44.132406 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 20 01:41:44.138470 extend-filesystems[1480]: Resized filesystem in /dev/vda9 Jan 20 01:41:44.137080 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 01:41:44.139030 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 20 01:41:44.144688 systemd-hostnamed[1506]: Hostname set to (static) Jan 20 01:41:44.148743 sshd_keygen[1535]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 01:41:44.189273 containerd[1496]: time="2026-01-20T01:41:44.189038924Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 20 01:41:44.220325 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 01:41:44.236627 containerd[1496]: time="2026-01-20T01:41:44.235588594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:41:44.238842 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.244698011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.244744888Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.244780922Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.245167340Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.245201091Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.245311991Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.245334705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.245609675Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:41:44.245626 containerd[1496]: time="2026-01-20T01:41:44.245638743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 20 01:41:44.246094 containerd[1496]: time="2026-01-20T01:41:44.245660268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:41:44.246094 containerd[1496]: time="2026-01-20T01:41:44.245677200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 20 01:41:44.246094 containerd[1496]: time="2026-01-20T01:41:44.245805442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 20 01:41:44.249670 containerd[1496]: time="2026-01-20T01:41:44.246263362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 20 01:41:44.249670 containerd[1496]: time="2026-01-20T01:41:44.246402192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 20 01:41:44.249670 containerd[1496]: time="2026-01-20T01:41:44.246436398Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 20 01:41:44.249670 containerd[1496]: time="2026-01-20T01:41:44.246604773Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 20 01:41:44.249670 containerd[1496]: time="2026-01-20T01:41:44.246763463Z" level=info msg="metadata content store policy set" policy=shared Jan 20 01:41:44.249030 systemd[1]: Started sshd@0-10.230.30.54:22-20.161.92.111:41626.service - OpenSSH per-connection server daemon (20.161.92.111:41626). Jan 20 01:41:44.256171 containerd[1496]: time="2026-01-20T01:41:44.255172392Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 20 01:41:44.256171 containerd[1496]: time="2026-01-20T01:41:44.255271981Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 20 01:41:44.256171 containerd[1496]: time="2026-01-20T01:41:44.255306380Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 20 01:41:44.256171 containerd[1496]: time="2026-01-20T01:41:44.255389192Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 20 01:41:44.256171 containerd[1496]: time="2026-01-20T01:41:44.255448500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 20 01:41:44.256171 containerd[1496]: time="2026-01-20T01:41:44.255690870Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 20 01:41:44.256736 containerd[1496]: time="2026-01-20T01:41:44.256567500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 20 01:41:44.256988 containerd[1496]: time="2026-01-20T01:41:44.256963758Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 20 01:41:44.257181 containerd[1496]: time="2026-01-20T01:41:44.257144014Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 20 01:41:44.257430 containerd[1496]: time="2026-01-20T01:41:44.257280486Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 20 01:41:44.257430 containerd[1496]: time="2026-01-20T01:41:44.257312575Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 20 01:41:44.257430 containerd[1496]: time="2026-01-20T01:41:44.257355604Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 20 01:41:44.257906 containerd[1496]: time="2026-01-20T01:41:44.257392870Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jan 20 01:41:44.257906 containerd[1496]: time="2026-01-20T01:41:44.257764680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 20 01:41:44.257906 containerd[1496]: time="2026-01-20T01:41:44.257788571Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 20 01:41:44.257906 containerd[1496]: time="2026-01-20T01:41:44.257838098Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 20 01:41:44.258665 containerd[1496]: time="2026-01-20T01:41:44.257867354Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 20 01:41:44.258665 containerd[1496]: time="2026-01-20T01:41:44.258278262Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 20 01:41:44.258665 containerd[1496]: time="2026-01-20T01:41:44.258451927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.258665 containerd[1496]: time="2026-01-20T01:41:44.258487705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.258665 containerd[1496]: time="2026-01-20T01:41:44.258542241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259029804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259073615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259114498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259156397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259223049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259257820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259310610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.259363 containerd[1496]: time="2026-01-20T01:41:44.259330154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.260051 containerd[1496]: time="2026-01-20T01:41:44.259703673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.260051 containerd[1496]: time="2026-01-20T01:41:44.259737629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.260051 containerd[1496]: time="2026-01-20T01:41:44.259820934Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 20 01:41:44.260051 containerd[1496]: time="2026-01-20T01:41:44.259897882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.260051 containerd[1496]: time="2026-01-20T01:41:44.259921652Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.260051 containerd[1496]: time="2026-01-20T01:41:44.259943480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 20 01:41:44.261108 containerd[1496]: time="2026-01-20T01:41:44.260365472Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 20 01:41:44.261108 containerd[1496]: time="2026-01-20T01:41:44.260721314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 20 01:41:44.261108 containerd[1496]: time="2026-01-20T01:41:44.260748335Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 20 01:41:44.261108 containerd[1496]: time="2026-01-20T01:41:44.260809748Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 20 01:41:44.261108 containerd[1496]: time="2026-01-20T01:41:44.260859920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 20 01:41:44.261108 containerd[1496]: time="2026-01-20T01:41:44.260896380Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 20 01:41:44.261108 containerd[1496]: time="2026-01-20T01:41:44.261043804Z" level=info msg="NRI interface is disabled by configuration." Jan 20 01:41:44.261865 containerd[1496]: time="2026-01-20T01:41:44.261081359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 20 01:41:44.262708 containerd[1496]: time="2026-01-20T01:41:44.262491576Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 20 01:41:44.263791 containerd[1496]: time="2026-01-20T01:41:44.262666745Z" level=info msg="Connect containerd service" Jan 20 01:41:44.263791 containerd[1496]: time="2026-01-20T01:41:44.263330098Z" level=info msg="using legacy CRI server" Jan 20 01:41:44.263791 containerd[1496]: time="2026-01-20T01:41:44.263360376Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 01:41:44.263791 containerd[1496]: time="2026-01-20T01:41:44.263658476Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 20 01:41:44.266318 containerd[1496]: time="2026-01-20T01:41:44.265550672Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 01:41:44.266318 
containerd[1496]: time="2026-01-20T01:41:44.265769803Z" level=info msg="Start subscribing containerd event" Jan 20 01:41:44.266318 containerd[1496]: time="2026-01-20T01:41:44.265886473Z" level=info msg="Start recovering state" Jan 20 01:41:44.266318 containerd[1496]: time="2026-01-20T01:41:44.266003545Z" level=info msg="Start event monitor" Jan 20 01:41:44.266318 containerd[1496]: time="2026-01-20T01:41:44.266037850Z" level=info msg="Start snapshots syncer" Jan 20 01:41:44.266318 containerd[1496]: time="2026-01-20T01:41:44.266059245Z" level=info msg="Start cni network conf syncer for default" Jan 20 01:41:44.266318 containerd[1496]: time="2026-01-20T01:41:44.266072951Z" level=info msg="Start streaming server" Jan 20 01:41:44.268364 containerd[1496]: time="2026-01-20T01:41:44.267809331Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 01:41:44.268364 containerd[1496]: time="2026-01-20T01:41:44.267897638Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 01:41:44.268364 containerd[1496]: time="2026-01-20T01:41:44.268065473Z" level=info msg="containerd successfully booted in 0.080715s" Jan 20 01:41:44.268179 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 01:41:44.286308 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 01:41:44.286683 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 01:41:44.298267 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 01:41:44.333443 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 01:41:44.343227 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 01:41:44.354157 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 20 01:41:44.355308 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 01:41:44.696172 systemd-networkd[1434]: eth0: Gained IPv6LL Jan 20 01:41:44.697498 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 20 01:41:44.701494 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 01:41:44.703463 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 01:41:44.712888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:44.724499 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 01:41:44.735486 tar[1500]: linux-amd64/README.md Jan 20 01:41:44.763965 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 01:41:44.770901 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 01:41:44.859496 sshd[1575]: Accepted publickey for core from 20.161.92.111 port 41626 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:41:44.862681 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:44.884081 systemd-logind[1487]: New session 1 of user core. Jan 20 01:41:44.886494 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 01:41:44.896023 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 01:41:44.921276 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 01:41:44.933371 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 20 01:41:44.941449 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 20 01:41:45.093114 systemd[1602]: Queued start job for default target default.target. Jan 20 01:41:45.103117 systemd[1602]: Created slice app.slice - User Application Slice. Jan 20 01:41:45.103384 systemd[1602]: Reached target paths.target - Paths. Jan 20 01:41:45.103533 systemd[1602]: Reached target timers.target - Timers. Jan 20 01:41:45.107756 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 01:41:45.124689 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 01:41:45.125042 systemd[1602]: Reached target sockets.target - Sockets. Jan 20 01:41:45.125446 systemd[1602]: Reached target basic.target - Basic System. Jan 20 01:41:45.125523 systemd[1602]: Reached target default.target - Main User Target. Jan 20 01:41:45.125615 systemd[1602]: Startup finished in 173ms. Jan 20 01:41:45.126194 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 01:41:45.137049 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 01:41:45.565141 systemd[1]: Started sshd@1-10.230.30.54:22-20.161.92.111:36736.service - OpenSSH per-connection server daemon (20.161.92.111:36736). Jan 20 01:41:45.797487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:45.805166 (kubelet)[1621]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:46.128204 sshd[1614]: Accepted publickey for core from 20.161.92.111 port 36736 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:41:46.130563 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:46.141685 systemd-logind[1487]: New session 2 of user core. Jan 20 01:41:46.147976 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 20 01:41:46.202397 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 20 01:41:46.203677 systemd-networkd[1434]: eth0: Ignoring DHCPv6 address 2a02:1348:179:878d:24:19ff:fee6:1e36/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:878d:24:19ff:fee6:1e36/64 assigned by NDisc. Jan 20 01:41:46.203688 systemd-networkd[1434]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 20 01:41:46.433870 kubelet[1621]: E0120 01:41:46.432897 1621 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:46.435820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:46.436141 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:46.437642 systemd[1]: kubelet.service: Consumed 1.115s CPU time. Jan 20 01:41:46.539199 sshd[1614]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:46.542987 systemd[1]: sshd@1-10.230.30.54:22-20.161.92.111:36736.service: Deactivated successfully. Jan 20 01:41:46.545686 systemd[1]: session-2.scope: Deactivated successfully. Jan 20 01:41:46.547872 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. 
Jan 20 01:41:46.549464 systemd-logind[1487]: Removed session 2. Jan 20 01:41:46.648167 systemd[1]: Started sshd@2-10.230.30.54:22-20.161.92.111:36742.service - OpenSSH per-connection server daemon (20.161.92.111:36742). Jan 20 01:41:47.212493 sshd[1635]: Accepted publickey for core from 20.161.92.111 port 36742 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:41:47.214682 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:47.221161 systemd-logind[1487]: New session 3 of user core. Jan 20 01:41:47.231860 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 01:41:47.618990 sshd[1635]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:47.624658 systemd[1]: sshd@2-10.230.30.54:22-20.161.92.111:36742.service: Deactivated successfully. Jan 20 01:41:47.627931 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 01:41:47.629537 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Jan 20 01:41:47.631104 systemd-logind[1487]: Removed session 3. Jan 20 01:41:48.088525 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 20 01:41:49.414836 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 20 01:41:49.429274 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 20 01:41:49.430186 systemd-logind[1487]: New session 4 of user core. Jan 20 01:41:49.438900 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 01:41:49.443938 systemd-logind[1487]: New session 5 of user core. Jan 20 01:41:49.449909 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 01:41:50.661586 coreos-metadata[1477]: Jan 20 01:41:50.661 WARN failed to locate config-drive, using the metadata service API instead Jan 20 01:41:50.689184 coreos-metadata[1477]: Jan 20 01:41:50.689 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 20 01:41:50.696942 coreos-metadata[1477]: Jan 20 01:41:50.696 INFO Fetch failed with 404: resource not found Jan 20 01:41:50.697025 coreos-metadata[1477]: Jan 20 01:41:50.696 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 01:41:50.697905 coreos-metadata[1477]: Jan 20 01:41:50.697 INFO Fetch successful Jan 20 01:41:50.698204 coreos-metadata[1477]: Jan 20 01:41:50.698 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 20 01:41:50.711381 coreos-metadata[1477]: Jan 20 01:41:50.711 INFO Fetch successful Jan 20 01:41:50.711557 coreos-metadata[1477]: Jan 20 01:41:50.711 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 20 01:41:50.727554 coreos-metadata[1477]: Jan 20 01:41:50.727 INFO Fetch successful Jan 20 01:41:50.727734 coreos-metadata[1477]: Jan 20 01:41:50.727 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 20 01:41:50.743957 coreos-metadata[1477]: Jan 20 01:41:50.743 INFO Fetch successful Jan 20 01:41:50.744135 coreos-metadata[1477]: Jan 20 01:41:50.744 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 20 01:41:50.760761 coreos-metadata[1477]: Jan 20 01:41:50.760 INFO Fetch successful Jan 20 01:41:50.796037 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 01:41:50.797582 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 20 01:41:51.190108 coreos-metadata[1539]: Jan 20 01:41:51.190 WARN failed to locate config-drive, using the metadata service API instead Jan 20 01:41:51.213740 coreos-metadata[1539]: Jan 20 01:41:51.213 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 20 01:41:51.252022 coreos-metadata[1539]: Jan 20 01:41:51.251 INFO Fetch successful Jan 20 01:41:51.252249 coreos-metadata[1539]: Jan 20 01:41:51.252 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 20 01:41:51.281440 coreos-metadata[1539]: Jan 20 01:41:51.281 INFO Fetch successful Jan 20 01:41:51.283829 unknown[1539]: wrote ssh authorized keys file for user: core Jan 20 01:41:51.316273 update-ssh-keys[1677]: Updated "/home/core/.ssh/authorized_keys" Jan 20 01:41:51.317238 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 01:41:51.320676 systemd[1]: Finished sshkeys.service. Jan 20 01:41:51.324809 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 01:41:51.325289 systemd[1]: Startup finished in 1.462s (kernel) + 13.760s (initrd) + 11.855s (userspace) = 27.077s. Jan 20 01:41:56.512981 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 01:41:56.521888 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:41:56.752530 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:41:56.765126 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:41:56.849189 kubelet[1689]: E0120 01:41:56.848983 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:41:56.853706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:41:56.853981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:41:57.721984 systemd[1]: Started sshd@3-10.230.30.54:22-20.161.92.111:55114.service - OpenSSH per-connection server daemon (20.161.92.111:55114). Jan 20 01:41:58.303497 sshd[1697]: Accepted publickey for core from 20.161.92.111 port 55114 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:41:58.305884 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:58.313113 systemd-logind[1487]: New session 6 of user core. Jan 20 01:41:58.324859 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 01:41:58.712489 sshd[1697]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:58.718307 systemd[1]: sshd@3-10.230.30.54:22-20.161.92.111:55114.service: Deactivated successfully. Jan 20 01:41:58.721586 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 01:41:58.723734 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Jan 20 01:41:58.725388 systemd-logind[1487]: Removed session 6. Jan 20 01:41:58.815968 systemd[1]: Started sshd@4-10.230.30.54:22-20.161.92.111:55122.service - OpenSSH per-connection server daemon (20.161.92.111:55122). 
Jan 20 01:41:59.383363 sshd[1704]: Accepted publickey for core from 20.161.92.111 port 55122 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:41:59.385480 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:41:59.393577 systemd-logind[1487]: New session 7 of user core. Jan 20 01:41:59.403945 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 01:41:59.783902 sshd[1704]: pam_unix(sshd:session): session closed for user core Jan 20 01:41:59.789515 systemd[1]: sshd@4-10.230.30.54:22-20.161.92.111:55122.service: Deactivated successfully. Jan 20 01:41:59.791900 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 01:41:59.792941 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Jan 20 01:41:59.794262 systemd-logind[1487]: Removed session 7. Jan 20 01:41:59.888014 systemd[1]: Started sshd@5-10.230.30.54:22-20.161.92.111:55130.service - OpenSSH per-connection server daemon (20.161.92.111:55130). Jan 20 01:42:00.368043 systemd[1]: Started sshd@6-10.230.30.54:22-164.92.217.44:40834.service - OpenSSH per-connection server daemon (164.92.217.44:40834). Jan 20 01:42:00.469350 sshd[1711]: Accepted publickey for core from 20.161.92.111 port 55130 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:42:00.472397 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:42:00.480692 systemd-logind[1487]: New session 8 of user core. Jan 20 01:42:00.497193 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 01:42:00.564216 sshd[1714]: Invalid user oracle from 164.92.217.44 port 40834 Jan 20 01:42:00.593794 sshd[1714]: Connection closed by invalid user oracle 164.92.217.44 port 40834 [preauth] Jan 20 01:42:00.596950 systemd[1]: sshd@6-10.230.30.54:22-164.92.217.44:40834.service: Deactivated successfully. Jan 20 01:42:00.875641 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 20 01:42:00.880626 systemd[1]: sshd@5-10.230.30.54:22-20.161.92.111:55130.service: Deactivated successfully. Jan 20 01:42:00.883123 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 01:42:00.885233 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Jan 20 01:42:00.887035 systemd-logind[1487]: Removed session 8. Jan 20 01:42:00.983479 systemd[1]: Started sshd@7-10.230.30.54:22-20.161.92.111:55142.service - OpenSSH per-connection server daemon (20.161.92.111:55142). Jan 20 01:42:01.543785 sshd[1723]: Accepted publickey for core from 20.161.92.111 port 55142 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:42:01.546502 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:42:01.554689 systemd-logind[1487]: New session 9 of user core. Jan 20 01:42:01.561925 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 01:42:01.873146 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 01:42:01.873849 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:42:01.892576 sudo[1726]: pam_unix(sudo:session): session closed for user root Jan 20 01:42:01.983252 sshd[1723]: pam_unix(sshd:session): session closed for user core Jan 20 01:42:01.989208 systemd[1]: sshd@7-10.230.30.54:22-20.161.92.111:55142.service: Deactivated successfully. Jan 20 01:42:01.991972 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 20 01:42:01.993045 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Jan 20 01:42:01.994760 systemd-logind[1487]: Removed session 9. Jan 20 01:42:02.093040 systemd[1]: Started sshd@8-10.230.30.54:22-20.161.92.111:43660.service - OpenSSH per-connection server daemon (20.161.92.111:43660). Jan 20 01:42:02.656342 sshd[1731]: Accepted publickey for core from 20.161.92.111 port 43660 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:42:02.659293 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:42:02.668843 systemd-logind[1487]: New session 10 of user core. Jan 20 01:42:02.674916 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 01:42:02.975060 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 01:42:02.975915 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:42:02.981864 sudo[1735]: pam_unix(sudo:session): session closed for user root Jan 20 01:42:02.991692 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 20 01:42:02.992179 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:42:03.020140 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 20 01:42:03.022422 auditctl[1738]: No rules Jan 20 01:42:03.022981 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 01:42:03.023287 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 20 01:42:03.027188 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 20 01:42:03.085342 augenrules[1756]: No rules Jan 20 01:42:03.087253 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 20 01:42:03.089103 sudo[1734]: pam_unix(sudo:session): session closed for user root Jan 20 01:42:03.180437 sshd[1731]: pam_unix(sshd:session): session closed for user core Jan 20 01:42:03.186814 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit. Jan 20 01:42:03.187585 systemd[1]: sshd@8-10.230.30.54:22-20.161.92.111:43660.service: Deactivated successfully. Jan 20 01:42:03.190075 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 01:42:03.191495 systemd-logind[1487]: Removed session 10. Jan 20 01:42:03.289326 systemd[1]: Started sshd@9-10.230.30.54:22-20.161.92.111:43670.service - OpenSSH per-connection server daemon (20.161.92.111:43670). Jan 20 01:42:03.848058 sshd[1764]: Accepted publickey for core from 20.161.92.111 port 43670 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:42:03.850281 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:42:03.857725 systemd-logind[1487]: New session 11 of user core. Jan 20 01:42:03.864820 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 01:42:04.164734 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 01:42:04.165235 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 01:42:04.637006 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 20 01:42:04.649266 (dockerd)[1782]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 01:42:05.089248 dockerd[1782]: time="2026-01-20T01:42:05.089134985Z" level=info msg="Starting up" Jan 20 01:42:05.235746 systemd[1]: var-lib-docker-metacopy\x2dcheck57866091-merged.mount: Deactivated successfully. Jan 20 01:42:05.255858 dockerd[1782]: time="2026-01-20T01:42:05.255782875Z" level=info msg="Loading containers: start." Jan 20 01:42:05.415644 kernel: Initializing XFRM netlink socket Jan 20 01:42:05.453749 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 20 01:42:05.524832 systemd-networkd[1434]: docker0: Link UP Jan 20 01:42:05.553625 dockerd[1782]: time="2026-01-20T01:42:05.553550520Z" level=info msg="Loading containers: done." Jan 20 01:42:05.580190 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1634313629-merged.mount: Deactivated successfully. Jan 20 01:42:05.582971 dockerd[1782]: time="2026-01-20T01:42:05.582363815Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 01:42:05.582971 dockerd[1782]: time="2026-01-20T01:42:05.582923121Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 20 01:42:05.583221 dockerd[1782]: time="2026-01-20T01:42:05.583181750Z" level=info msg="Daemon has completed initialization" Jan 20 01:42:06.365736 systemd-resolved[1388]: Clock change detected. Flushing caches. Jan 20 01:42:06.366177 systemd-timesyncd[1406]: Contacted time server [2a02:6b67:d551:8f04::]:123 (2.flatcar.pool.ntp.org). Jan 20 01:42:06.366299 systemd-timesyncd[1406]: Initial clock synchronization to Tue 2026-01-20 01:42:06.365580 UTC. Jan 20 01:42:06.399379 dockerd[1782]: time="2026-01-20T01:42:06.399183597Z" level=info msg="API listen on /run/docker.sock" Jan 20 01:42:06.401370 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 01:42:07.689392 containerd[1496]: time="2026-01-20T01:42:07.689278221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 01:42:07.788105 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 01:42:07.800172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:08.003534 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:08.011747 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:08.097385 kubelet[1932]: E0120 01:42:08.097264 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:08.099976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:08.100228 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:08.700162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3054819197.mount: Deactivated successfully. 
Jan 20 01:42:12.337825 containerd[1496]: time="2026-01-20T01:42:12.337679328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:12.339629 containerd[1496]: time="2026-01-20T01:42:12.339377855Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 20 01:42:12.340859 containerd[1496]: time="2026-01-20T01:42:12.340263795Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:12.344560 containerd[1496]: time="2026-01-20T01:42:12.344513615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:12.346310 containerd[1496]: time="2026-01-20T01:42:12.346266218Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 4.656874498s" Jan 20 01:42:12.346387 containerd[1496]: time="2026-01-20T01:42:12.346329890Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 20 01:42:12.355057 containerd[1496]: time="2026-01-20T01:42:12.354990133Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 01:42:15.452674 containerd[1496]: time="2026-01-20T01:42:15.450188948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:15.452674 containerd[1496]: time="2026-01-20T01:42:15.454977598Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 20 01:42:15.464985 containerd[1496]: time="2026-01-20T01:42:15.458890819Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:15.464985 containerd[1496]: time="2026-01-20T01:42:15.463814847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:15.464985 containerd[1496]: time="2026-01-20T01:42:15.464951921Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.109885186s" Jan 20 01:42:15.465196 containerd[1496]: time="2026-01-20T01:42:15.465052917Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 20 
01:42:15.473213 containerd[1496]: time="2026-01-20T01:42:15.472286639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 01:42:17.008693 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 20 01:42:17.539997 containerd[1496]: time="2026-01-20T01:42:17.539925947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:17.541730 containerd[1496]: time="2026-01-20T01:42:17.541590648Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 20 01:42:17.542871 containerd[1496]: time="2026-01-20T01:42:17.542679861Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:17.547553 containerd[1496]: time="2026-01-20T01:42:17.547507458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:17.549594 containerd[1496]: time="2026-01-20T01:42:17.549539011Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.077182032s" Jan 20 01:42:17.549915 containerd[1496]: time="2026-01-20T01:42:17.549726737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 20 01:42:17.551778 containerd[1496]: time="2026-01-20T01:42:17.551726885Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 01:42:18.288965 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 01:42:18.305135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:18.828064 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:18.830092 (kubelet)[2020]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:18.924766 kubelet[2020]: E0120 01:42:18.924673 2020 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:18.929327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:18.929583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:19.431681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033853664.mount: Deactivated successfully. 
Jan 20 01:42:20.209543 containerd[1496]: time="2026-01-20T01:42:20.209464638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:20.211064 containerd[1496]: time="2026-01-20T01:42:20.210842810Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 20 01:42:20.212865 containerd[1496]: time="2026-01-20T01:42:20.211860152Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:20.214875 containerd[1496]: time="2026-01-20T01:42:20.214603879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:20.215863 containerd[1496]: time="2026-01-20T01:42:20.215711102Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.663937346s" Jan 20 01:42:20.215863 containerd[1496]: time="2026-01-20T01:42:20.215758639Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 20 01:42:20.217809 containerd[1496]: time="2026-01-20T01:42:20.217608653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 01:42:20.870473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179870695.mount: Deactivated successfully. 
Jan 20 01:42:22.148627 containerd[1496]: time="2026-01-20T01:42:22.148468162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:22.150649 containerd[1496]: time="2026-01-20T01:42:22.150578301Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 20 01:42:22.151398 containerd[1496]: time="2026-01-20T01:42:22.151357182Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:22.157117 containerd[1496]: time="2026-01-20T01:42:22.157036961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:22.159845 containerd[1496]: time="2026-01-20T01:42:22.158755692Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.941080264s" Jan 20 01:42:22.159845 containerd[1496]: time="2026-01-20T01:42:22.158851734Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 20 01:42:22.161151 containerd[1496]: time="2026-01-20T01:42:22.161110395Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 01:42:22.796731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1489884153.mount: Deactivated successfully. 
Jan 20 01:42:22.803371 containerd[1496]: time="2026-01-20T01:42:22.802929198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:22.804716 containerd[1496]: time="2026-01-20T01:42:22.804360583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 20 01:42:22.806062 containerd[1496]: time="2026-01-20T01:42:22.805574965Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:22.808609 containerd[1496]: time="2026-01-20T01:42:22.808565482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:22.810160 containerd[1496]: time="2026-01-20T01:42:22.810096331Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 648.919421ms" Jan 20 01:42:22.810327 containerd[1496]: time="2026-01-20T01:42:22.810297038Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 20 01:42:22.811595 containerd[1496]: time="2026-01-20T01:42:22.811543666Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 01:42:23.602512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3503794290.mount: Deactivated successfully. Jan 20 01:42:27.588216 containerd[1496]: time="2026-01-20T01:42:27.588035363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:27.590845 containerd[1496]: time="2026-01-20T01:42:27.590537266Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 20 01:42:27.591670 containerd[1496]: time="2026-01-20T01:42:27.591627834Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:27.596520 containerd[1496]: time="2026-01-20T01:42:27.596445278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:27.598363 containerd[1496]: time="2026-01-20T01:42:27.598310691Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.786718903s" Jan 20 01:42:27.598431 containerd[1496]: time="2026-01-20T01:42:27.598371213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 20 01:42:29.038871 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
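The image pulls above complete the control-plane set from registry.k8s.io: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.32.11, coredns v1.11.3, pause 3.10 and etcd 3.5.16-0. The same set can be fetched in one step; a sketch, assuming kubeadm and crictl are present on the node:

    # Bulk pre-pull of the control-plane images (version taken from the log):
    kubeadm config images pull --kubernetes-version v1.32.11
    crictl images | grep registry.k8s.io    # verify the pulled set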
Jan 20 01:42:29.048171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:29.241073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:29.246543 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 01:42:29.359442 kubelet[2168]: E0120 01:42:29.359202 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 01:42:29.363339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 01:42:29.363611 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 01:42:30.036913 update_engine[1488]: I20260120 01:42:30.036124 1488 update_attempter.cc:509] Updating boot flags... Jan 20 01:42:30.111867 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2183) Jan 20 01:42:30.213872 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2181) Jan 20 01:42:32.235685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:32.243220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:32.282136 systemd[1]: Reloading requested from client PID 2197 ('systemctl') (unit session-11.scope)... Jan 20 01:42:32.282185 systemd[1]: Reloading... Jan 20 01:42:32.500900 zram_generator::config[2239]: No configuration found. Jan 20 01:42:32.638387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:42:32.755777 systemd[1]: Reloading finished in 472 ms. Jan 20 01:42:32.840454 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:32.845553 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:42:32.846060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:32.852247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:33.020578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:33.031320 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:42:33.113950 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:42:33.113950 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 01:42:33.113950 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
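The three deprecation warnings above concern flags that now belong in the kubelet config file. containerRuntimeEndpoint and volumePluginDir are the matching KubeletConfiguration fields; --pod-infra-container-image has no config-file equivalent and, per the warning, is removed in 1.35 once the image garbage collector takes the sandbox image from CRI. A sketch of the two fields (the containerd socket path is an assumption; the volume plugin dir is the one the kubelet logs below):

    # Config-file equivalents for two of the deprecated flags:
    cat >>/var/lib/kubelet/config.yaml <<'EOF'
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF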
Jan 20 01:42:33.114635 kubelet[2305]: I0120 01:42:33.114062 2305 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:42:33.408883 kubelet[2305]: I0120 01:42:33.407851 2305 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:42:33.408883 kubelet[2305]: I0120 01:42:33.407932 2305 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:42:33.408883 kubelet[2305]: I0120 01:42:33.408429 2305 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:42:33.450021 kubelet[2305]: E0120 01:42:33.449653 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.30.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:33.450279 kubelet[2305]: I0120 01:42:33.450253 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:42:33.469665 kubelet[2305]: E0120 01:42:33.469611 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 01:42:33.469932 kubelet[2305]: I0120 01:42:33.469878 2305 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 01:42:33.479593 kubelet[2305]: I0120 01:42:33.479559 2305 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 01:42:33.483422 kubelet[2305]: I0120 01:42:33.483369 2305 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:42:33.483902 kubelet[2305]: I0120 01:42:33.483559 2305 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-vpmg3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:42:33.486165 kubelet[2305]: I0120 01:42:33.486141 2305 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:42:33.486296 kubelet[2305]: I0120 01:42:33.486279 2305 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:42:33.487906 kubelet[2305]: I0120 01:42:33.487753 2305 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:42:33.492197 kubelet[2305]: I0120 01:42:33.491965 2305 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:42:33.492197 kubelet[2305]: I0120 01:42:33.492075 2305 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:42:33.492197 kubelet[2305]: I0120 01:42:33.492117 2305 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:42:33.492197 kubelet[2305]: I0120 01:42:33.492159 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:42:33.496519 kubelet[2305]: W0120 01:42:33.496120 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.30.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vpmg3.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:33.496697 kubelet[2305]: E0120 01:42:33.496665 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.30.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vpmg3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 
01:42:33.498218 kubelet[2305]: I0120 01:42:33.498191 2305 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 01:42:33.502189 kubelet[2305]: I0120 01:42:33.501994 2305 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:42:33.503666 kubelet[2305]: W0120 01:42:33.502819 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 01:42:33.505428 kubelet[2305]: I0120 01:42:33.505106 2305 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:42:33.505428 kubelet[2305]: I0120 01:42:33.505174 2305 server.go:1287] "Started kubelet" Jan 20 01:42:33.506589 kubelet[2305]: W0120 01:42:33.506112 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.30.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:33.506589 kubelet[2305]: E0120 01:42:33.506180 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.30.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:33.506589 kubelet[2305]: I0120 01:42:33.506340 2305 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:42:33.512861 kubelet[2305]: I0120 01:42:33.512123 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:42:33.512861 kubelet[2305]: I0120 01:42:33.512817 2305 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:42:33.514005 kubelet[2305]: I0120 01:42:33.513977 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:42:33.518234 kubelet[2305]: E0120 01:42:33.514281 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.30.54:6443/api/v1/namespaces/default/events\": dial tcp 10.230.30.54:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-vpmg3.gb1.brightbox.com.188c4cf17053cfb8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-vpmg3.gb1.brightbox.com,UID:srv-vpmg3.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-vpmg3.gb1.brightbox.com,},FirstTimestamp:2026-01-20 01:42:33.505132472 +0000 UTC m=+0.468446982,LastTimestamp:2026-01-20 01:42:33.505132472 +0000 UTC m=+0.468446982,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-vpmg3.gb1.brightbox.com,}" Jan 20 01:42:33.526126 kubelet[2305]: I0120 01:42:33.526088 2305 server.go:479] "Adding debug handlers to kubelet server" Jan 20 01:42:33.529262 kubelet[2305]: I0120 01:42:33.526210 2305 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:42:33.529567 kubelet[2305]: I0120 01:42:33.529540 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:42:33.532420 
kubelet[2305]: I0120 01:42:33.526241 2305 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:42:33.532634 kubelet[2305]: I0120 01:42:33.532614 2305 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:42:33.532745 kubelet[2305]: E0120 01:42:33.526427 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" Jan 20 01:42:33.535226 kubelet[2305]: E0120 01:42:33.534319 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.30.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vpmg3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.54:6443: connect: connection refused" interval="200ms" Jan 20 01:42:33.535226 kubelet[2305]: I0120 01:42:33.534587 2305 factory.go:221] Registration of the systemd container factory successfully Jan 20 01:42:33.535226 kubelet[2305]: I0120 01:42:33.534717 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:42:33.540306 kubelet[2305]: W0120 01:42:33.540258 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.30.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:33.540504 kubelet[2305]: E0120 01:42:33.540464 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.30.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:33.548236 kubelet[2305]: I0120 01:42:33.548212 2305 factory.go:221] Registration of the containerd container factory successfully Jan 20 01:42:33.553962 systemd[1]: Started sshd@10-10.230.30.54:22-164.92.217.44:42802.service - OpenSSH per-connection server daemon (164.92.217.44:42802). Jan 20 01:42:33.579366 kubelet[2305]: E0120 01:42:33.579060 2305 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:42:33.590131 kubelet[2305]: I0120 01:42:33.589956 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 01:42:33.593308 kubelet[2305]: I0120 01:42:33.592720 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 01:42:33.593308 kubelet[2305]: I0120 01:42:33.592766 2305 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:42:33.593308 kubelet[2305]: I0120 01:42:33.592909 2305 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
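Every "connection refused" in this stretch targets https://10.230.30.54:6443, the node's own control-plane endpoint: the apiserver the kubelet is trying to reach runs as a static pod that this same kubelet has not started yet, so the failures are self-resolving. A way to watch them clear, as a sketch:

    # Refused until the kube-apiserver static pod further below is running:
    curl -sk https://10.230.30.54:6443/healthz; echo
    crictl ps --name kube-apiserver    # listed once the container starts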
Jan 20 01:42:33.593308 kubelet[2305]: I0120 01:42:33.592926 2305 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:42:33.593308 kubelet[2305]: E0120 01:42:33.593042 2305 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:42:33.594990 kubelet[2305]: W0120 01:42:33.594820 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.30.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:33.595151 kubelet[2305]: E0120 01:42:33.595008 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.30.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:33.608275 kubelet[2305]: I0120 01:42:33.608241 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:42:33.608275 kubelet[2305]: I0120 01:42:33.608266 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:42:33.608503 kubelet[2305]: I0120 01:42:33.608308 2305 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:42:33.610645 kubelet[2305]: I0120 01:42:33.610571 2305 policy_none.go:49] "None policy: Start" Jan 20 01:42:33.610645 kubelet[2305]: I0120 01:42:33.610628 2305 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:42:33.610796 kubelet[2305]: I0120 01:42:33.610660 2305 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:42:33.619579 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 01:42:33.633852 kubelet[2305]: E0120 01:42:33.633757 2305 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" Jan 20 01:42:33.640923 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 01:42:33.645341 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 01:42:33.655791 kubelet[2305]: I0120 01:42:33.655745 2305 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:42:33.657850 kubelet[2305]: I0120 01:42:33.656096 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:42:33.657850 kubelet[2305]: I0120 01:42:33.656128 2305 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:42:33.657850 kubelet[2305]: I0120 01:42:33.656554 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:42:33.661221 kubelet[2305]: E0120 01:42:33.661110 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:42:33.661301 kubelet[2305]: E0120 01:42:33.661203 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-vpmg3.gb1.brightbox.com\" not found" Jan 20 01:42:33.710785 systemd[1]: Created slice kubepods-burstable-pod668b971ab8e3e05397e803c0f0f9cda6.slice - libcontainer container kubepods-burstable-pod668b971ab8e3e05397e803c0f0f9cda6.slice. Jan 20 01:42:33.720379 kubelet[2305]: E0120 01:42:33.720301 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.729049 systemd[1]: Created slice kubepods-burstable-pod9ff467063b9c42ff1ac295ba0bb4e21c.slice - libcontainer container kubepods-burstable-pod9ff467063b9c42ff1ac295ba0bb4e21c.slice. Jan 20 01:42:33.732256 kubelet[2305]: E0120 01:42:33.731846 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.735868 kubelet[2305]: I0120 01:42:33.735704 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ff467063b9c42ff1ac295ba0bb4e21c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" (UID: \"9ff467063b9c42ff1ac295ba0bb4e21c\") " pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.736356 kubelet[2305]: I0120 01:42:33.736217 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-flexvolume-dir\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.736356 kubelet[2305]: I0120 01:42:33.736292 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-kubeconfig\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.736356 kubelet[2305]: I0120 01:42:33.736327 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-k8s-certs\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.736591 systemd[1]: Created slice kubepods-burstable-poddc7d3484c067b9e9bd790ab2ef73cc4a.slice - libcontainer container kubepods-burstable-poddc7d3484c067b9e9bd790ab2ef73cc4a.slice. 
Jan 20 01:42:33.737040 kubelet[2305]: I0120 01:42:33.736622 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.737040 kubelet[2305]: I0120 01:42:33.736681 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/668b971ab8e3e05397e803c0f0f9cda6-kubeconfig\") pod \"kube-scheduler-srv-vpmg3.gb1.brightbox.com\" (UID: \"668b971ab8e3e05397e803c0f0f9cda6\") " pod="kube-system/kube-scheduler-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.737040 kubelet[2305]: I0120 01:42:33.736714 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ff467063b9c42ff1ac295ba0bb4e21c-ca-certs\") pod \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" (UID: \"9ff467063b9c42ff1ac295ba0bb4e21c\") " pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.737040 kubelet[2305]: I0120 01:42:33.736870 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ff467063b9c42ff1ac295ba0bb4e21c-k8s-certs\") pod \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" (UID: \"9ff467063b9c42ff1ac295ba0bb4e21c\") " pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.737040 kubelet[2305]: I0120 01:42:33.736908 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-ca-certs\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.738765 kubelet[2305]: E0120 01:42:33.735749 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.30.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vpmg3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.54:6443: connect: connection refused" interval="400ms" Jan 20 01:42:33.743096 kubelet[2305]: E0120 01:42:33.742724 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.759982 kubelet[2305]: I0120 01:42:33.759947 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.760414 sshd[2325]: Invalid user oracle from 164.92.217.44 port 42802 Jan 20 01:42:33.761564 kubelet[2305]: E0120 01:42:33.761467 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.30.54:6443/api/v1/nodes\": dial tcp 10.230.30.54:6443: connect: connection refused" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.807885 sshd[2325]: Connection closed by invalid user oracle 164.92.217.44 port 42802 [preauth] Jan 20 01:42:33.811078 systemd[1]: sshd@10-10.230.30.54:22-164.92.217.44:42802.service: Deactivated successfully. 
Jan 20 01:42:33.966204 kubelet[2305]: I0120 01:42:33.965867 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:33.967147 kubelet[2305]: E0120 01:42:33.967111 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.30.54:6443/api/v1/nodes\": dial tcp 10.230.30.54:6443: connect: connection refused" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:34.022601 containerd[1496]: time="2026-01-20T01:42:34.022500665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-vpmg3.gb1.brightbox.com,Uid:668b971ab8e3e05397e803c0f0f9cda6,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:34.041314 containerd[1496]: time="2026-01-20T01:42:34.041181630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-vpmg3.gb1.brightbox.com,Uid:9ff467063b9c42ff1ac295ba0bb4e21c,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:34.044736 containerd[1496]: time="2026-01-20T01:42:34.044676985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-vpmg3.gb1.brightbox.com,Uid:dc7d3484c067b9e9bd790ab2ef73cc4a,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:34.139849 kubelet[2305]: E0120 01:42:34.139631 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.30.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vpmg3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.54:6443: connect: connection refused" interval="800ms" Jan 20 01:42:34.344051 kubelet[2305]: W0120 01:42:34.343937 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.30.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:34.344348 kubelet[2305]: E0120 01:42:34.344108 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.30.54:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:34.371411 kubelet[2305]: I0120 01:42:34.370775 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:34.371411 kubelet[2305]: E0120 01:42:34.371357 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.30.54:6443/api/v1/nodes\": dial tcp 10.230.30.54:6443: connect: connection refused" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:34.538694 kubelet[2305]: W0120 01:42:34.538605 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.30.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:34.539102 kubelet[2305]: E0120 01:42:34.539068 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.30.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:34.649228 kubelet[2305]: W0120 01:42:34.648553 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://10.230.30.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:34.649228 kubelet[2305]: E0120 01:42:34.648627 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.30.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:34.675359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3267336607.mount: Deactivated successfully. Jan 20 01:42:34.684334 containerd[1496]: time="2026-01-20T01:42:34.683059382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:42:34.684334 containerd[1496]: time="2026-01-20T01:42:34.684080810Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 20 01:42:34.685868 containerd[1496]: time="2026-01-20T01:42:34.684888181Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:42:34.686953 containerd[1496]: time="2026-01-20T01:42:34.686913260Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:42:34.688288 containerd[1496]: time="2026-01-20T01:42:34.688231850Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:42:34.691295 containerd[1496]: time="2026-01-20T01:42:34.690328161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 01:42:34.691295 containerd[1496]: time="2026-01-20T01:42:34.690909932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 20 01:42:34.692025 containerd[1496]: time="2026-01-20T01:42:34.691988172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 01:42:34.695889 containerd[1496]: time="2026-01-20T01:42:34.695824607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 651.024514ms" Jan 20 01:42:34.699464 containerd[1496]: time="2026-01-20T01:42:34.699427581Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 676.736625ms" Jan 20 01:42:34.700383 containerd[1496]: 
time="2026-01-20T01:42:34.700348405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 659.014566ms" Jan 20 01:42:34.759258 kubelet[2305]: W0120 01:42:34.759177 2305 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.30.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vpmg3.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.30.54:6443: connect: connection refused Jan 20 01:42:34.764443 kubelet[2305]: E0120 01:42:34.763908 2305 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.30.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-vpmg3.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:34.941901 kubelet[2305]: E0120 01:42:34.940555 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.30.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-vpmg3.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.30.54:6443: connect: connection refused" interval="1.6s" Jan 20 01:42:34.945315 containerd[1496]: time="2026-01-20T01:42:34.944763357Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:34.945315 containerd[1496]: time="2026-01-20T01:42:34.944977497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:34.945315 containerd[1496]: time="2026-01-20T01:42:34.944997206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.945721 containerd[1496]: time="2026-01-20T01:42:34.945257839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.957171 containerd[1496]: time="2026-01-20T01:42:34.957059262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:34.958339 containerd[1496]: time="2026-01-20T01:42:34.957889892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:34.958722 containerd[1496]: time="2026-01-20T01:42:34.958656087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.959035 containerd[1496]: time="2026-01-20T01:42:34.958946791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:34.959331 containerd[1496]: time="2026-01-20T01:42:34.959267270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.959520 containerd[1496]: time="2026-01-20T01:42:34.959476797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:34.959763 containerd[1496]: time="2026-01-20T01:42:34.959692593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.961525 containerd[1496]: time="2026-01-20T01:42:34.961432489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:34.997086 systemd[1]: Started cri-containerd-7a1c5c233f9f0b8cbf3ad48b50781ee03c7510121c677071f95bff6332229d9a.scope - libcontainer container 7a1c5c233f9f0b8cbf3ad48b50781ee03c7510121c677071f95bff6332229d9a. Jan 20 01:42:35.005083 systemd[1]: Started cri-containerd-5f93e7efa848cb2fc53451fe2779bf8e2daa5c644868a2f1fbc25302fa1bc806.scope - libcontainer container 5f93e7efa848cb2fc53451fe2779bf8e2daa5c644868a2f1fbc25302fa1bc806. Jan 20 01:42:35.028213 systemd[1]: Started cri-containerd-3ca2c76f7b2309f50b0306590423a31c5405f2700138849b0474470f6ae26693.scope - libcontainer container 3ca2c76f7b2309f50b0306590423a31c5405f2700138849b0474470f6ae26693. Jan 20 01:42:35.127800 containerd[1496]: time="2026-01-20T01:42:35.127479538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-vpmg3.gb1.brightbox.com,Uid:9ff467063b9c42ff1ac295ba0bb4e21c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a1c5c233f9f0b8cbf3ad48b50781ee03c7510121c677071f95bff6332229d9a\"" Jan 20 01:42:35.145201 containerd[1496]: time="2026-01-20T01:42:35.144888317Z" level=info msg="CreateContainer within sandbox \"7a1c5c233f9f0b8cbf3ad48b50781ee03c7510121c677071f95bff6332229d9a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 01:42:35.162986 containerd[1496]: time="2026-01-20T01:42:35.162933158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-vpmg3.gb1.brightbox.com,Uid:668b971ab8e3e05397e803c0f0f9cda6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f93e7efa848cb2fc53451fe2779bf8e2daa5c644868a2f1fbc25302fa1bc806\"" Jan 20 01:42:35.163963 containerd[1496]: time="2026-01-20T01:42:35.163910639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-vpmg3.gb1.brightbox.com,Uid:dc7d3484c067b9e9bd790ab2ef73cc4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ca2c76f7b2309f50b0306590423a31c5405f2700138849b0474470f6ae26693\"" Jan 20 01:42:35.167893 containerd[1496]: time="2026-01-20T01:42:35.167391432Z" level=info msg="CreateContainer within sandbox \"3ca2c76f7b2309f50b0306590423a31c5405f2700138849b0474470f6ae26693\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 01:42:35.167893 containerd[1496]: time="2026-01-20T01:42:35.167756928Z" level=info msg="CreateContainer within sandbox \"5f93e7efa848cb2fc53451fe2779bf8e2daa5c644868a2f1fbc25302fa1bc806\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 01:42:35.176408 kubelet[2305]: I0120 01:42:35.176374 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:35.177793 kubelet[2305]: E0120 01:42:35.177716 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.30.54:6443/api/v1/nodes\": dial tcp 10.230.30.54:6443: connect: connection refused" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:35.189809 containerd[1496]: time="2026-01-20T01:42:35.189765083Z" level=info msg="CreateContainer within sandbox 
\"7a1c5c233f9f0b8cbf3ad48b50781ee03c7510121c677071f95bff6332229d9a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4e1f1cae0a39de82d676f637576532a3545d4abe0e6c24227aa7288531c0e826\"" Jan 20 01:42:35.190510 containerd[1496]: time="2026-01-20T01:42:35.190478369Z" level=info msg="StartContainer for \"4e1f1cae0a39de82d676f637576532a3545d4abe0e6c24227aa7288531c0e826\"" Jan 20 01:42:35.218880 containerd[1496]: time="2026-01-20T01:42:35.218094574Z" level=info msg="CreateContainer within sandbox \"3ca2c76f7b2309f50b0306590423a31c5405f2700138849b0474470f6ae26693\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d4a5411d45a0966807f6c3083c5a14a403a67eafc199e6447566f0046ba95a76\"" Jan 20 01:42:35.220313 containerd[1496]: time="2026-01-20T01:42:35.220169803Z" level=info msg="StartContainer for \"d4a5411d45a0966807f6c3083c5a14a403a67eafc199e6447566f0046ba95a76\"" Jan 20 01:42:35.221572 containerd[1496]: time="2026-01-20T01:42:35.221429217Z" level=info msg="CreateContainer within sandbox \"5f93e7efa848cb2fc53451fe2779bf8e2daa5c644868a2f1fbc25302fa1bc806\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6328773aff567ecdc1bde4f8336d355a0461d175403b9b13d555283bdf734a35\"" Jan 20 01:42:35.222006 containerd[1496]: time="2026-01-20T01:42:35.221921672Z" level=info msg="StartContainer for \"6328773aff567ecdc1bde4f8336d355a0461d175403b9b13d555283bdf734a35\"" Jan 20 01:42:35.247903 systemd[1]: Started cri-containerd-4e1f1cae0a39de82d676f637576532a3545d4abe0e6c24227aa7288531c0e826.scope - libcontainer container 4e1f1cae0a39de82d676f637576532a3545d4abe0e6c24227aa7288531c0e826. Jan 20 01:42:35.310074 systemd[1]: Started cri-containerd-d4a5411d45a0966807f6c3083c5a14a403a67eafc199e6447566f0046ba95a76.scope - libcontainer container d4a5411d45a0966807f6c3083c5a14a403a67eafc199e6447566f0046ba95a76. Jan 20 01:42:35.324567 systemd[1]: Started cri-containerd-6328773aff567ecdc1bde4f8336d355a0461d175403b9b13d555283bdf734a35.scope - libcontainer container 6328773aff567ecdc1bde4f8336d355a0461d175403b9b13d555283bdf734a35. 
Jan 20 01:42:35.370266 containerd[1496]: time="2026-01-20T01:42:35.370198428Z" level=info msg="StartContainer for \"4e1f1cae0a39de82d676f637576532a3545d4abe0e6c24227aa7288531c0e826\" returns successfully" Jan 20 01:42:35.424860 containerd[1496]: time="2026-01-20T01:42:35.423549114Z" level=info msg="StartContainer for \"6328773aff567ecdc1bde4f8336d355a0461d175403b9b13d555283bdf734a35\" returns successfully" Jan 20 01:42:35.452068 containerd[1496]: time="2026-01-20T01:42:35.452002843Z" level=info msg="StartContainer for \"d4a5411d45a0966807f6c3083c5a14a403a67eafc199e6447566f0046ba95a76\" returns successfully" Jan 20 01:42:35.546875 kubelet[2305]: E0120 01:42:35.546667 2305 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.30.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.30.54:6443: connect: connection refused" logger="UnhandledError" Jan 20 01:42:35.620210 kubelet[2305]: E0120 01:42:35.620144 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:35.625452 kubelet[2305]: E0120 01:42:35.625418 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:35.630960 kubelet[2305]: E0120 01:42:35.630931 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:36.636910 kubelet[2305]: E0120 01:42:36.635896 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:36.638918 kubelet[2305]: E0120 01:42:36.637714 2305 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:36.786870 kubelet[2305]: I0120 01:42:36.784229 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:38.509304 kubelet[2305]: I0120 01:42:38.508974 2305 apiserver.go:52] "Watching apiserver" Jan 20 01:42:38.624376 kubelet[2305]: E0120 01:42:38.624321 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-vpmg3.gb1.brightbox.com\" not found" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:38.633078 kubelet[2305]: I0120 01:42:38.632816 2305 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:42:38.701258 kubelet[2305]: I0120 01:42:38.700659 2305 kubelet_node_status.go:78] "Successfully registered node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:38.701258 kubelet[2305]: E0120 01:42:38.700720 2305 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-vpmg3.gb1.brightbox.com\": node \"srv-vpmg3.gb1.brightbox.com\" not found" Jan 20 01:42:38.727231 kubelet[2305]: I0120 01:42:38.727182 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-vpmg3.gb1.brightbox.com" 
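With all three StartContainer calls returning successfully, the apiserver comes up and the errors invert: by 01:42:38 the kubelet is watching the apiserver and logs "Successfully registered node". A verification step from the node itself, assuming the kubeadm-default admin kubeconfig path:

    kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes
    kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system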
Jan 20 01:42:38.793450 kubelet[2305]: E0120 01:42:38.790930 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-vpmg3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:38.793450 kubelet[2305]: I0120 01:42:38.791043 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:38.804475 kubelet[2305]: E0120 01:42:38.804412 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:38.804665 kubelet[2305]: I0120 01:42:38.804474 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:38.813544 kubelet[2305]: E0120 01:42:38.813491 2305 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:40.577273 systemd[1]: Reloading requested from client PID 2586 ('systemctl') (unit session-11.scope)... Jan 20 01:42:40.577929 systemd[1]: Reloading... Jan 20 01:42:40.700888 zram_generator::config[2621]: No configuration found. Jan 20 01:42:40.775775 kubelet[2305]: I0120 01:42:40.775703 2305 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:40.794878 kubelet[2305]: W0120 01:42:40.793134 2305 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:40.931988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 20 01:42:41.067182 systemd[1]: Reloading finished in 488 ms. Jan 20 01:42:41.134696 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:41.147442 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 01:42:41.147894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:41.148070 systemd[1]: kubelet.service: Consumed 1.073s CPU time, 126.9M memory peak, 0B memory swap peak. Jan 20 01:42:41.156242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 01:42:41.450424 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 01:42:41.463176 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 01:42:41.546728 kubelet[2689]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:42:41.546728 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
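
The "no PriorityClass with name system-node-critical was found" failures above are a bootstrap-ordering race: the API server creates the built-in system-node-critical and system-cluster-critical classes itself shortly after it starts serving, and the kubelet's mirror-pod creation simply retries until then (by 01:42:40 it succeeds). For reference, a user-defined PriorityClass is created the same way; the name and value below are made up, and the system- prefix is reserved for the built-ins:

    package main

    import (
        "context"
        "log"

        schedv1 "k8s.io/api/scheduling/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        pc := &schedv1.PriorityClass{
            ObjectMeta:    metav1.ObjectMeta{Name: "example-high-priority"}, // hypothetical
            Value:         100000, // arbitrary; system-node-critical is 2000001000
            GlobalDefault: false,
            Description:   "illustrative priority class",
        }
        if _, err := cs.SchedulingV1().PriorityClasses().Create(
            context.Background(), pc, metav1.CreateOptions{}); err != nil {
            log.Fatal(err)
        }
    }
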
Jan 20 01:42:41.546728 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 01:42:41.551189 kubelet[2689]: I0120 01:42:41.550860 2689 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 01:42:41.563858 kubelet[2689]: I0120 01:42:41.562737 2689 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 01:42:41.563858 kubelet[2689]: I0120 01:42:41.562767 2689 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 01:42:41.563858 kubelet[2689]: I0120 01:42:41.563643 2689 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 01:42:41.567355 kubelet[2689]: I0120 01:42:41.567325 2689 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 01:42:41.573443 kubelet[2689]: I0120 01:42:41.572609 2689 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 01:42:41.586553 kubelet[2689]: E0120 01:42:41.585419 2689 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 20 01:42:41.586553 kubelet[2689]: I0120 01:42:41.585473 2689 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 20 01:42:41.592927 kubelet[2689]: I0120 01:42:41.591948 2689 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 01:42:41.592927 kubelet[2689]: I0120 01:42:41.592321 2689 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 01:42:41.592927 kubelet[2689]: I0120 01:42:41.592363 2689 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-vpmg3.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 01:42:41.592927 kubelet[2689]: I0120 01:42:41.592620 2689 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 01:42:41.593361 kubelet[2689]: I0120 01:42:41.592637 2689 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 01:42:41.593361 kubelet[2689]: I0120 01:42:41.592737 2689 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:42:41.593361 kubelet[2689]: I0120 01:42:41.593002 2689 kubelet.go:446] "Attempting to sync node with API server" Jan 20 01:42:41.593361 kubelet[2689]: I0120 01:42:41.593042 2689 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 01:42:41.593361 kubelet[2689]: I0120 01:42:41.593075 2689 kubelet.go:352] "Adding apiserver pod source" Jan 20 01:42:41.593361 kubelet[2689]: I0120 01:42:41.593098 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 01:42:41.596866 kubelet[2689]: I0120 01:42:41.594410 2689 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 20 01:42:41.596866 kubelet[2689]: I0120 01:42:41.595080 2689 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 01:42:41.596866 kubelet[2689]: I0120 01:42:41.596540 2689 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 01:42:41.596866 kubelet[2689]: I0120 01:42:41.596598 2689 server.go:1287] "Started kubelet" Jan 20 01:42:41.614894 kubelet[2689]: I0120 01:42:41.614363 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 01:42:41.621115 kubelet[2689]: I0120 01:42:41.620457 2689 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 20 01:42:41.649468 kubelet[2689]: I0120 01:42:41.649297 2689 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 01:42:41.653642 kubelet[2689]: I0120 01:42:41.653125 2689 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 01:42:41.653642 kubelet[2689]: I0120 01:42:41.653512 2689 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 01:42:41.658334 kubelet[2689]: I0120 01:42:41.656223 2689 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 01:42:41.658334 kubelet[2689]: E0120 01:42:41.656417 2689 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-vpmg3.gb1.brightbox.com\" not found" Jan 20 01:42:41.660548 kubelet[2689]: I0120 01:42:41.659988 2689 server.go:479] "Adding debug handlers to kubelet server" Jan 20 01:42:41.663967 kubelet[2689]: I0120 01:42:41.663499 2689 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 01:42:41.663967 kubelet[2689]: I0120 01:42:41.663735 2689 reconciler.go:26] "Reconciler: start to sync state" Jan 20 01:42:41.667246 kubelet[2689]: I0120 01:42:41.666551 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 01:42:41.669239 kubelet[2689]: I0120 01:42:41.669100 2689 factory.go:221] Registration of the systemd container factory successfully Jan 20 01:42:41.669326 kubelet[2689]: I0120 01:42:41.669250 2689 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 01:42:41.669649 kubelet[2689]: I0120 01:42:41.669502 2689 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 01:42:41.675639 kubelet[2689]: I0120 01:42:41.671346 2689 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 01:42:41.675639 kubelet[2689]: I0120 01:42:41.671395 2689 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 01:42:41.675639 kubelet[2689]: I0120 01:42:41.671411 2689 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 01:42:41.675639 kubelet[2689]: E0120 01:42:41.671478 2689 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 01:42:41.679217 kubelet[2689]: I0120 01:42:41.679182 2689 factory.go:221] Registration of the containerd container factory successfully Jan 20 01:42:41.696335 kubelet[2689]: E0120 01:42:41.694462 2689 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 01:42:41.772375 kubelet[2689]: E0120 01:42:41.772203 2689 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 20 01:42:41.786947 kubelet[2689]: I0120 01:42:41.786886 2689 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 01:42:41.788345 kubelet[2689]: I0120 01:42:41.787728 2689 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 01:42:41.788345 kubelet[2689]: I0120 01:42:41.787778 2689 state_mem.go:36] "Initialized new in-memory state store" Jan 20 01:42:41.788345 kubelet[2689]: I0120 01:42:41.788177 2689 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 01:42:41.790895 kubelet[2689]: I0120 01:42:41.788202 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 01:42:41.790895 kubelet[2689]: I0120 01:42:41.790491 2689 policy_none.go:49] "None policy: Start" Jan 20 01:42:41.790895 kubelet[2689]: I0120 01:42:41.790524 2689 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 01:42:41.790895 kubelet[2689]: I0120 01:42:41.790559 2689 state_mem.go:35] "Initializing new in-memory state store" Jan 20 01:42:41.790895 kubelet[2689]: I0120 01:42:41.790767 2689 state_mem.go:75] "Updated machine memory state" Jan 20 01:42:41.801043 kubelet[2689]: I0120 01:42:41.800621 2689 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 01:42:41.801043 kubelet[2689]: I0120 01:42:41.800916 2689 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 01:42:41.801043 kubelet[2689]: I0120 01:42:41.800938 2689 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 01:42:41.803365 kubelet[2689]: I0120 01:42:41.803337 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 01:42:41.808522 kubelet[2689]: E0120 01:42:41.808485 2689 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 01:42:41.932016 kubelet[2689]: I0120 01:42:41.931963 2689 kubelet_node_status.go:75] "Attempting to register node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:41.945081 kubelet[2689]: I0120 01:42:41.945030 2689 kubelet_node_status.go:124] "Node was previously registered" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:41.945218 kubelet[2689]: I0120 01:42:41.945187 2689 kubelet_node_status.go:78] "Successfully registered node" node="srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:41.975991 kubelet[2689]: I0120 01:42:41.975446 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:41.981501 kubelet[2689]: I0120 01:42:41.981285 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:41.984256 kubelet[2689]: I0120 01:42:41.983655 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:41.991186 kubelet[2689]: W0120 01:42:41.991026 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:41.991186 kubelet[2689]: E0120 01:42:41.991107 2689 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:41.991878 kubelet[2689]: W0120 01:42:41.991534 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:41.992208 kubelet[2689]: W0120 01:42:41.992182 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:42.071070 kubelet[2689]: I0120 01:42:42.071008 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-flexvolume-dir\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071070 kubelet[2689]: I0120 01:42:42.071069 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-k8s-certs\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071340 kubelet[2689]: I0120 01:42:42.071105 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-kubeconfig\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071340 kubelet[2689]: I0120 01:42:42.071134 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071340 kubelet[2689]: I0120 01:42:42.071166 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/668b971ab8e3e05397e803c0f0f9cda6-kubeconfig\") pod \"kube-scheduler-srv-vpmg3.gb1.brightbox.com\" (UID: \"668b971ab8e3e05397e803c0f0f9cda6\") " pod="kube-system/kube-scheduler-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071340 kubelet[2689]: I0120 01:42:42.071192 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ff467063b9c42ff1ac295ba0bb4e21c-ca-certs\") pod \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" (UID: \"9ff467063b9c42ff1ac295ba0bb4e21c\") " pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071340 kubelet[2689]: I0120 01:42:42.071218 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ff467063b9c42ff1ac295ba0bb4e21c-k8s-certs\") pod \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" (UID: \"9ff467063b9c42ff1ac295ba0bb4e21c\") " pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071633 kubelet[2689]: I0120 01:42:42.071257 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ff467063b9c42ff1ac295ba0bb4e21c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" (UID: \"9ff467063b9c42ff1ac295ba0bb4e21c\") " pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.071633 kubelet[2689]: I0120 01:42:42.071286 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc7d3484c067b9e9bd790ab2ef73cc4a-ca-certs\") pod \"kube-controller-manager-srv-vpmg3.gb1.brightbox.com\" (UID: \"dc7d3484c067b9e9bd790ab2ef73cc4a\") " pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.604192 kubelet[2689]: I0120 01:42:42.604136 2689 apiserver.go:52] "Watching apiserver" Jan 20 01:42:42.664317 kubelet[2689]: I0120 01:42:42.664245 2689 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 01:42:42.738622 kubelet[2689]: I0120 01:42:42.737988 2689 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.753068 kubelet[2689]: W0120 01:42:42.753028 2689 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 20 01:42:42.753290 kubelet[2689]: E0120 01:42:42.753105 2689 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-vpmg3.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" Jan 20 01:42:42.786960 kubelet[2689]: I0120 01:42:42.786654 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-vpmg3.gb1.brightbox.com" podStartSLOduration=1.786614425 
podStartE2EDuration="1.786614425s" podCreationTimestamp="2026-01-20 01:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:42.778443257 +0000 UTC m=+1.304870655" watchObservedRunningTime="2026-01-20 01:42:42.786614425 +0000 UTC m=+1.313041816" Jan 20 01:42:42.816302 kubelet[2689]: I0120 01:42:42.816069 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-vpmg3.gb1.brightbox.com" podStartSLOduration=2.816050105 podStartE2EDuration="2.816050105s" podCreationTimestamp="2026-01-20 01:42:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:42.802768154 +0000 UTC m=+1.329195540" watchObservedRunningTime="2026-01-20 01:42:42.816050105 +0000 UTC m=+1.342477484" Jan 20 01:42:42.817235 kubelet[2689]: I0120 01:42:42.816944 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-vpmg3.gb1.brightbox.com" podStartSLOduration=1.816933313 podStartE2EDuration="1.816933313s" podCreationTimestamp="2026-01-20 01:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:42.812112748 +0000 UTC m=+1.338540144" watchObservedRunningTime="2026-01-20 01:42:42.816933313 +0000 UTC m=+1.343360711" Jan 20 01:42:46.812715 kubelet[2689]: I0120 01:42:46.812555 2689 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 01:42:46.814940 containerd[1496]: time="2026-01-20T01:42:46.814379876Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
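
The "Updating runtime config through cri with podcidr" entry above is the kubelet handing this node's pod CIDR (192.168.0.0/24, assigned by the controller manager) down to containerd, which then waits for Calico to install a CNI config that uses it. A quick standard-library sketch of what membership in that range means for pod IPs allocated on this node:

    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        // The pod CIDR assigned to this node in the log above.
        _, podNet, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            log.Fatal(err)
        }

        // Any pod IP the CNI plugin allocates here must fall inside the range.
        for _, ip := range []string{"192.168.0.10", "192.168.1.10"} {
            fmt.Printf("%s in %s: %v\n", ip, podNet, podNet.Contains(net.ParseIP(ip)))
        }
        // Output:
        // 192.168.0.10 in 192.168.0.0/24: true
        // 192.168.1.10 in 192.168.0.0/24: false
    }
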
Jan 20 01:42:46.816759 kubelet[2689]: I0120 01:42:46.815248 2689 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 01:42:47.612967 kubelet[2689]: I0120 01:42:47.610095 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9afd71f-fcde-463b-b2ed-1cc3f2f37eee-kube-proxy\") pod \"kube-proxy-8r8q4\" (UID: \"e9afd71f-fcde-463b-b2ed-1cc3f2f37eee\") " pod="kube-system/kube-proxy-8r8q4" Jan 20 01:42:47.612967 kubelet[2689]: I0120 01:42:47.610193 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9afd71f-fcde-463b-b2ed-1cc3f2f37eee-lib-modules\") pod \"kube-proxy-8r8q4\" (UID: \"e9afd71f-fcde-463b-b2ed-1cc3f2f37eee\") " pod="kube-system/kube-proxy-8r8q4" Jan 20 01:42:47.612967 kubelet[2689]: I0120 01:42:47.610236 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9afd71f-fcde-463b-b2ed-1cc3f2f37eee-xtables-lock\") pod \"kube-proxy-8r8q4\" (UID: \"e9afd71f-fcde-463b-b2ed-1cc3f2f37eee\") " pod="kube-system/kube-proxy-8r8q4" Jan 20 01:42:47.612967 kubelet[2689]: I0120 01:42:47.610288 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwcn7\" (UniqueName: \"kubernetes.io/projected/e9afd71f-fcde-463b-b2ed-1cc3f2f37eee-kube-api-access-pwcn7\") pod \"kube-proxy-8r8q4\" (UID: \"e9afd71f-fcde-463b-b2ed-1cc3f2f37eee\") " pod="kube-system/kube-proxy-8r8q4" Jan 20 01:42:47.612334 systemd[1]: Created slice kubepods-besteffort-pode9afd71f_fcde_463b_b2ed_1cc3f2f37eee.slice - libcontainer container kubepods-besteffort-pode9afd71f_fcde_463b_b2ed_1cc3f2f37eee.slice. Jan 20 01:42:47.887965 systemd[1]: Created slice kubepods-besteffort-pod3a3003af_d81f_4e1a_b8c7_e4aa88a57893.slice - libcontainer container kubepods-besteffort-pod3a3003af_d81f_4e1a_b8c7_e4aa88a57893.slice. Jan 20 01:42:47.915679 kubelet[2689]: I0120 01:42:47.913240 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9tcf\" (UniqueName: \"kubernetes.io/projected/3a3003af-d81f-4e1a-b8c7-e4aa88a57893-kube-api-access-p9tcf\") pod \"tigera-operator-7dcd859c48-xkhzh\" (UID: \"3a3003af-d81f-4e1a-b8c7-e4aa88a57893\") " pod="tigera-operator/tigera-operator-7dcd859c48-xkhzh" Jan 20 01:42:47.915679 kubelet[2689]: I0120 01:42:47.913328 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a3003af-d81f-4e1a-b8c7-e4aa88a57893-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xkhzh\" (UID: \"3a3003af-d81f-4e1a-b8c7-e4aa88a57893\") " pod="tigera-operator/tigera-operator-7dcd859c48-xkhzh" Jan 20 01:42:47.926047 containerd[1496]: time="2026-01-20T01:42:47.925913133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8r8q4,Uid:e9afd71f-fcde-463b-b2ed-1cc3f2f37eee,Namespace:kube-system,Attempt:0,}" Jan 20 01:42:47.978557 containerd[1496]: time="2026-01-20T01:42:47.977387157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:47.978557 containerd[1496]: time="2026-01-20T01:42:47.977531963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:47.978557 containerd[1496]: time="2026-01-20T01:42:47.977563898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:47.978557 containerd[1496]: time="2026-01-20T01:42:47.977740018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.022040 systemd[1]: Started cri-containerd-f413654de234b3695f3e598c597c822febbb4b2bc05b324cc87f62b0aec089db.scope - libcontainer container f413654de234b3695f3e598c597c822febbb4b2bc05b324cc87f62b0aec089db. Jan 20 01:42:48.074163 containerd[1496]: time="2026-01-20T01:42:48.074075716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8r8q4,Uid:e9afd71f-fcde-463b-b2ed-1cc3f2f37eee,Namespace:kube-system,Attempt:0,} returns sandbox id \"f413654de234b3695f3e598c597c822febbb4b2bc05b324cc87f62b0aec089db\"" Jan 20 01:42:48.080922 containerd[1496]: time="2026-01-20T01:42:48.080709944Z" level=info msg="CreateContainer within sandbox \"f413654de234b3695f3e598c597c822febbb4b2bc05b324cc87f62b0aec089db\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 01:42:48.102979 containerd[1496]: time="2026-01-20T01:42:48.102932976Z" level=info msg="CreateContainer within sandbox \"f413654de234b3695f3e598c597c822febbb4b2bc05b324cc87f62b0aec089db\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d80b090d444e84fc052451dc1e32b33dab1329acce11257e0fb0b9295845e3e3\"" Jan 20 01:42:48.105274 containerd[1496]: time="2026-01-20T01:42:48.103784080Z" level=info msg="StartContainer for \"d80b090d444e84fc052451dc1e32b33dab1329acce11257e0fb0b9295845e3e3\"" Jan 20 01:42:48.154135 systemd[1]: Started cri-containerd-d80b090d444e84fc052451dc1e32b33dab1329acce11257e0fb0b9295845e3e3.scope - libcontainer container d80b090d444e84fc052451dc1e32b33dab1329acce11257e0fb0b9295845e3e3. Jan 20 01:42:48.201581 containerd[1496]: time="2026-01-20T01:42:48.201525839Z" level=info msg="StartContainer for \"d80b090d444e84fc052451dc1e32b33dab1329acce11257e0fb0b9295845e3e3\" returns successfully" Jan 20 01:42:48.208766 containerd[1496]: time="2026-01-20T01:42:48.207432895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xkhzh,Uid:3a3003af-d81f-4e1a-b8c7-e4aa88a57893,Namespace:tigera-operator,Attempt:0,}" Jan 20 01:42:48.246860 containerd[1496]: time="2026-01-20T01:42:48.246689954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:42:48.247294 containerd[1496]: time="2026-01-20T01:42:48.246785382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:42:48.247294 containerd[1496]: time="2026-01-20T01:42:48.246809662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.247477 containerd[1496]: time="2026-01-20T01:42:48.247337197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:42:48.282065 systemd[1]: Started cri-containerd-67dcea343b1aee9bc1fc09f2e71a811d190f32a3b24b41882d999a07fc2414a2.scope - libcontainer container 67dcea343b1aee9bc1fc09f2e71a811d190f32a3b24b41882d999a07fc2414a2. 
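
The RunPodSandbox/CreateContainer/StartContainer entries in this stretch are containerd answering CRI gRPC calls from the kubelet over /run/containerd/containerd.sock. A minimal sketch of talking to that endpoint directly with the published CRI API — listing existing sandboxes rather than creating one, to keep it side-effect free:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Same socket the kubelet's --container-runtime-endpoint points at.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        // Each "RunPodSandbox ... returns sandbox id" line in this log
        // corresponds to one entry this call would list.
        resp, err := rt.ListPodSandbox(context.Background(),
            &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            fmt.Printf("%s/%s -> %s\n",
                sb.Metadata.Namespace, sb.Metadata.Name, sb.Id)
        }
    }
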
Jan 20 01:42:48.356672 containerd[1496]: time="2026-01-20T01:42:48.356546591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xkhzh,Uid:3a3003af-d81f-4e1a-b8c7-e4aa88a57893,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"67dcea343b1aee9bc1fc09f2e71a811d190f32a3b24b41882d999a07fc2414a2\"" Jan 20 01:42:48.362620 containerd[1496]: time="2026-01-20T01:42:48.362395321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 01:42:50.440897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1192070845.mount: Deactivated successfully. Jan 20 01:42:50.905399 kubelet[2689]: I0120 01:42:50.904388 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8r8q4" podStartSLOduration=3.904319351 podStartE2EDuration="3.904319351s" podCreationTimestamp="2026-01-20 01:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:42:48.770166413 +0000 UTC m=+7.296593833" watchObservedRunningTime="2026-01-20 01:42:50.904319351 +0000 UTC m=+9.430746742" Jan 20 01:42:51.521889 containerd[1496]: time="2026-01-20T01:42:51.520703007Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:51.522659 containerd[1496]: time="2026-01-20T01:42:51.522097685Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 20 01:42:51.523123 containerd[1496]: time="2026-01-20T01:42:51.523059823Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:51.532943 containerd[1496]: time="2026-01-20T01:42:51.532884919Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:42:51.534561 containerd[1496]: time="2026-01-20T01:42:51.534520703Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.17204223s" Jan 20 01:42:51.534691 containerd[1496]: time="2026-01-20T01:42:51.534594419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 20 01:42:51.539799 containerd[1496]: time="2026-01-20T01:42:51.539328539Z" level=info msg="CreateContainer within sandbox \"67dcea343b1aee9bc1fc09f2e71a811d190f32a3b24b41882d999a07fc2414a2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 01:42:51.559314 containerd[1496]: time="2026-01-20T01:42:51.559242460Z" level=info msg="CreateContainer within sandbox \"67dcea343b1aee9bc1fc09f2e71a811d190f32a3b24b41882d999a07fc2414a2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fde0acf78a591d2eb9da585bdd6365013622216c6a8729c8fdf8c6dd307e2a01\"" Jan 20 01:42:51.559642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1950924987.mount: Deactivated successfully. 
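
The pull above resolves quay.io/tigera/operator:v1.38.7 to the digest sha256:1b629a... and reports roughly 25 MB read in about 3.17 s. The same pull through containerd's Go client, timed the same way — socket and namespace as in the earlier sketches, registry access assumed:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        // WithPullUnpack also unpacks the layers, as the CRI-driven pull does.
        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // Mirrors the 'Pulled image ... repo digest ... in 3.17s' entry above.
        fmt.Printf("pulled %s (%s) in %s\n",
            img.Name(), img.Target().Digest, time.Since(start))
    }
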
Jan 20 01:42:51.562996 containerd[1496]: time="2026-01-20T01:42:51.562154125Z" level=info msg="StartContainer for \"fde0acf78a591d2eb9da585bdd6365013622216c6a8729c8fdf8c6dd307e2a01\"" Jan 20 01:42:51.625055 systemd[1]: Started cri-containerd-fde0acf78a591d2eb9da585bdd6365013622216c6a8729c8fdf8c6dd307e2a01.scope - libcontainer container fde0acf78a591d2eb9da585bdd6365013622216c6a8729c8fdf8c6dd307e2a01. Jan 20 01:42:51.670898 containerd[1496]: time="2026-01-20T01:42:51.670078837Z" level=info msg="StartContainer for \"fde0acf78a591d2eb9da585bdd6365013622216c6a8729c8fdf8c6dd307e2a01\" returns successfully" Jan 20 01:42:51.825207 kubelet[2689]: I0120 01:42:51.825101 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xkhzh" podStartSLOduration=1.648416152 podStartE2EDuration="4.825074866s" podCreationTimestamp="2026-01-20 01:42:47 +0000 UTC" firstStartedPulling="2026-01-20 01:42:48.359753838 +0000 UTC m=+6.886181218" lastFinishedPulling="2026-01-20 01:42:51.536412547 +0000 UTC m=+10.062839932" observedRunningTime="2026-01-20 01:42:51.804503015 +0000 UTC m=+10.330930423" watchObservedRunningTime="2026-01-20 01:42:51.825074866 +0000 UTC m=+10.351502252" Jan 20 01:42:57.972194 sudo[1767]: pam_unix(sudo:session): session closed for user root Jan 20 01:42:58.066246 sshd[1764]: pam_unix(sshd:session): session closed for user core Jan 20 01:42:58.082081 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Jan 20 01:42:58.082506 systemd[1]: sshd@9-10.230.30.54:22-20.161.92.111:43670.service: Deactivated successfully. Jan 20 01:42:58.091699 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 01:42:58.092090 systemd[1]: session-11.scope: Consumed 7.196s CPU time, 142.8M memory peak, 0B memory swap peak. Jan 20 01:42:58.096328 systemd-logind[1487]: Removed session 11. Jan 20 01:43:05.502627 systemd[1]: Created slice kubepods-besteffort-pode5ec8271_b411_454f_81ca_214c110339e2.slice - libcontainer container kubepods-besteffort-pode5ec8271_b411_454f_81ca_214c110339e2.slice. 
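
A little further down, as the Calico pods are scheduled, the kubelet starts probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and logs pages of "Failed to unmarshal output for command: init ... unexpected end of JSON input". The nodeagent~uds FlexVolume driver binary is simply not installed, so the init driver call produces empty output, and empty input is exactly what Go's json package rejects with that error. A two-line reproduction:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // The driver binary was not found, so the driver-call output is empty.
        var status map[string]interface{}
        err := json.Unmarshal([]byte(""), &status)
        fmt.Println(err) // unexpected end of JSON input
    }

The repetition below is the plugin prober re-running on every volume reconcile pass until the driver appears; the errors are noisy but harmless when FlexVolume is unused.
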
Jan 20 01:43:05.544696 kubelet[2689]: I0120 01:43:05.544576 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5ec8271-b411-454f-81ca-214c110339e2-typha-certs\") pod \"calico-typha-5599d94444-69zzz\" (UID: \"e5ec8271-b411-454f-81ca-214c110339e2\") " pod="calico-system/calico-typha-5599d94444-69zzz" Jan 20 01:43:05.545696 kubelet[2689]: I0120 01:43:05.544727 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5ec8271-b411-454f-81ca-214c110339e2-tigera-ca-bundle\") pod \"calico-typha-5599d94444-69zzz\" (UID: \"e5ec8271-b411-454f-81ca-214c110339e2\") " pod="calico-system/calico-typha-5599d94444-69zzz" Jan 20 01:43:05.545696 kubelet[2689]: I0120 01:43:05.544798 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpztv\" (UniqueName: \"kubernetes.io/projected/e5ec8271-b411-454f-81ca-214c110339e2-kube-api-access-qpztv\") pod \"calico-typha-5599d94444-69zzz\" (UID: \"e5ec8271-b411-454f-81ca-214c110339e2\") " pod="calico-system/calico-typha-5599d94444-69zzz" Jan 20 01:43:05.646139 kubelet[2689]: I0120 01:43:05.645023 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-cni-net-dir\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646139 kubelet[2689]: I0120 01:43:05.645074 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-lib-modules\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646139 kubelet[2689]: I0120 01:43:05.645120 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-cni-bin-dir\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646139 kubelet[2689]: I0120 01:43:05.645148 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-var-run-calico\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646139 kubelet[2689]: I0120 01:43:05.645191 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shmgz\" (UniqueName: \"kubernetes.io/projected/ba419904-99fd-481c-91ae-c92922814838-kube-api-access-shmgz\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646531 kubelet[2689]: I0120 01:43:05.645265 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ba419904-99fd-481c-91ae-c92922814838-node-certs\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 
20 01:43:05.646531 kubelet[2689]: I0120 01:43:05.645328 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-var-lib-calico\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646531 kubelet[2689]: I0120 01:43:05.645375 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-xtables-lock\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646531 kubelet[2689]: I0120 01:43:05.645406 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba419904-99fd-481c-91ae-c92922814838-tigera-ca-bundle\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646531 kubelet[2689]: I0120 01:43:05.645434 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-policysync\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646812 kubelet[2689]: I0120 01:43:05.645472 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-cni-log-dir\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.646812 kubelet[2689]: I0120 01:43:05.645499 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ba419904-99fd-481c-91ae-c92922814838-flexvol-driver-host\") pod \"calico-node-kb97z\" (UID: \"ba419904-99fd-481c-91ae-c92922814838\") " pod="calico-system/calico-node-kb97z" Jan 20 01:43:05.655928 systemd[1]: Created slice kubepods-besteffort-podba419904_99fd_481c_91ae_c92922814838.slice - libcontainer container kubepods-besteffort-podba419904_99fd_481c_91ae_c92922814838.slice. Jan 20 01:43:05.755903 kubelet[2689]: E0120 01:43:05.752610 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.755903 kubelet[2689]: W0120 01:43:05.752671 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.755903 kubelet[2689]: E0120 01:43:05.755468 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:05.756264 kubelet[2689]: E0120 01:43:05.756129 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.756264 kubelet[2689]: W0120 01:43:05.756165 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.756264 kubelet[2689]: E0120 01:43:05.756186 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.769443 kubelet[2689]: E0120 01:43:05.769404 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.769443 kubelet[2689]: W0120 01:43:05.769433 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.769443 kubelet[2689]: E0120 01:43:05.769459 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.780863 kubelet[2689]: E0120 01:43:05.779456 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:05.789456 kubelet[2689]: E0120 01:43:05.788949 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.789456 kubelet[2689]: W0120 01:43:05.788978 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.789456 kubelet[2689]: E0120 01:43:05.789003 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.814687 containerd[1496]: time="2026-01-20T01:43:05.814333630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5599d94444-69zzz,Uid:e5ec8271-b411-454f-81ca-214c110339e2,Namespace:calico-system,Attempt:0,}" Jan 20 01:43:05.841768 kubelet[2689]: E0120 01:43:05.841618 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.841768 kubelet[2689]: W0120 01:43:05.841671 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.842546 kubelet[2689]: E0120 01:43:05.841899 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:05.843106 kubelet[2689]: E0120 01:43:05.843012 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.843106 kubelet[2689]: W0120 01:43:05.843034 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.843106 kubelet[2689]: E0120 01:43:05.843051 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.843656 kubelet[2689]: E0120 01:43:05.843623 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.843656 kubelet[2689]: W0120 01:43:05.843653 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.843814 kubelet[2689]: E0120 01:43:05.843670 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.845883 kubelet[2689]: E0120 01:43:05.844414 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.845883 kubelet[2689]: W0120 01:43:05.844438 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.845883 kubelet[2689]: E0120 01:43:05.844455 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.846956 kubelet[2689]: E0120 01:43:05.846441 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.846956 kubelet[2689]: W0120 01:43:05.846457 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.846956 kubelet[2689]: E0120 01:43:05.846473 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.849856 kubelet[2689]: E0120 01:43:05.848940 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.849856 kubelet[2689]: W0120 01:43:05.848970 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.849856 kubelet[2689]: E0120 01:43:05.848991 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:05.849856 kubelet[2689]: E0120 01:43:05.849308 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.849856 kubelet[2689]: W0120 01:43:05.849341 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.849856 kubelet[2689]: E0120 01:43:05.849370 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.851807 kubelet[2689]: E0120 01:43:05.851057 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.851807 kubelet[2689]: W0120 01:43:05.851079 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.851807 kubelet[2689]: E0120 01:43:05.851117 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.854581 kubelet[2689]: E0120 01:43:05.853574 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.854581 kubelet[2689]: W0120 01:43:05.853598 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.854581 kubelet[2689]: E0120 01:43:05.853617 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.855172 kubelet[2689]: E0120 01:43:05.855148 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.855172 kubelet[2689]: W0120 01:43:05.855172 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.855298 kubelet[2689]: E0120 01:43:05.855190 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.856874 kubelet[2689]: E0120 01:43:05.855597 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.856874 kubelet[2689]: W0120 01:43:05.855621 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.856874 kubelet[2689]: E0120 01:43:05.855637 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:05.857067 kubelet[2689]: E0120 01:43:05.856985 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.857067 kubelet[2689]: W0120 01:43:05.857000 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.857067 kubelet[2689]: E0120 01:43:05.857016 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.857414 kubelet[2689]: E0120 01:43:05.857387 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.857414 kubelet[2689]: W0120 01:43:05.857407 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.857519 kubelet[2689]: E0120 01:43:05.857423 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.860104 kubelet[2689]: E0120 01:43:05.860073 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.860104 kubelet[2689]: W0120 01:43:05.860101 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.860279 kubelet[2689]: E0120 01:43:05.860118 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.860475 kubelet[2689]: E0120 01:43:05.860438 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.860475 kubelet[2689]: W0120 01:43:05.860462 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.860584 kubelet[2689]: E0120 01:43:05.860478 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.861535 kubelet[2689]: E0120 01:43:05.861501 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.861535 kubelet[2689]: W0120 01:43:05.861523 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.861693 kubelet[2689]: E0120 01:43:05.861539 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:05.862578 kubelet[2689]: E0120 01:43:05.862549 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.862578 kubelet[2689]: W0120 01:43:05.862572 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.862720 kubelet[2689]: E0120 01:43:05.862604 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.863519 kubelet[2689]: E0120 01:43:05.863407 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.863519 kubelet[2689]: W0120 01:43:05.863429 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.863519 kubelet[2689]: E0120 01:43:05.863447 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.864098 kubelet[2689]: E0120 01:43:05.863945 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.864098 kubelet[2689]: W0120 01:43:05.863966 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.864098 kubelet[2689]: E0120 01:43:05.863986 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.865854 kubelet[2689]: E0120 01:43:05.864808 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.865854 kubelet[2689]: W0120 01:43:05.864862 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.865854 kubelet[2689]: E0120 01:43:05.864890 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:05.867862 kubelet[2689]: E0120 01:43:05.866326 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:05.867862 kubelet[2689]: W0120 01:43:05.866368 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:05.867862 kubelet[2689]: E0120 01:43:05.866386 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 20 01:43:05.867862 kubelet[2689]: I0120 01:43:05.866424 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd-kubelet-dir\") pod \"csi-node-driver-w59jj\" (UID: \"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd\") " pod="calico-system/csi-node-driver-w59jj"
Jan 20 01:43:05.868082 kubelet[2689]: E0120 01:43:05.867882 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.868082 kubelet[2689]: W0120 01:43:05.867900 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.868082 kubelet[2689]: E0120 01:43:05.867936 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.868082 kubelet[2689]: I0120 01:43:05.867963 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5cnt\" (UniqueName: \"kubernetes.io/projected/c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd-kube-api-access-w5cnt\") pod \"csi-node-driver-w59jj\" (UID: \"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd\") " pod="calico-system/csi-node-driver-w59jj"
Jan 20 01:43:05.868327 kubelet[2689]: E0120 01:43:05.868302 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.868327 kubelet[2689]: W0120 01:43:05.868325 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.868551 kubelet[2689]: E0120 01:43:05.868514 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.869014 kubelet[2689]: I0120 01:43:05.868557 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd-registration-dir\") pod \"csi-node-driver-w59jj\" (UID: \"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd\") " pod="calico-system/csi-node-driver-w59jj"
Jan 20 01:43:05.869958 kubelet[2689]: E0120 01:43:05.869936 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.869958 kubelet[2689]: W0120 01:43:05.869957 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.870399 kubelet[2689]: E0120 01:43:05.870368 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.871727 kubelet[2689]: E0120 01:43:05.871616 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.871727 kubelet[2689]: W0120 01:43:05.871639 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.873942 kubelet[2689]: E0120 01:43:05.873915 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.873942 kubelet[2689]: W0120 01:43:05.873938 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.875256 kubelet[2689]: E0120 01:43:05.875233 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.875256 kubelet[2689]: W0120 01:43:05.875255 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.875493 kubelet[2689]: E0120 01:43:05.875106 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.875587 kubelet[2689]: E0120 01:43:05.875521 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.875587 kubelet[2689]: I0120 01:43:05.875555 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd-varrun\") pod \"csi-node-driver-w59jj\" (UID: \"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd\") " pod="calico-system/csi-node-driver-w59jj"
Jan 20 01:43:05.875587 kubelet[2689]: E0120 01:43:05.875581 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.880939 kubelet[2689]: E0120 01:43:05.880905 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.880939 kubelet[2689]: W0120 01:43:05.880934 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.881150 kubelet[2689]: E0120 01:43:05.880960 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.883888 kubelet[2689]: E0120 01:43:05.883797 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.883888 kubelet[2689]: W0120 01:43:05.883846 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.883888 kubelet[2689]: E0120 01:43:05.883874 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.884097 kubelet[2689]: I0120 01:43:05.883910 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd-socket-dir\") pod \"csi-node-driver-w59jj\" (UID: \"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd\") " pod="calico-system/csi-node-driver-w59jj"
Jan 20 01:43:05.885293 kubelet[2689]: E0120 01:43:05.885039 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.885293 kubelet[2689]: W0120 01:43:05.885276 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.885550 kubelet[2689]: E0120 01:43:05.885308 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.886901 kubelet[2689]: E0120 01:43:05.886794 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.887622 kubelet[2689]: W0120 01:43:05.886824 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.888572 kubelet[2689]: E0120 01:43:05.887589 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.888943 kubelet[2689]: E0120 01:43:05.888906 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.888943 kubelet[2689]: W0120 01:43:05.888929 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.889243 kubelet[2689]: E0120 01:43:05.888946 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.890212 kubelet[2689]: E0120 01:43:05.890149 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.890212 kubelet[2689]: W0120 01:43:05.890174 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.890498 kubelet[2689]: E0120 01:43:05.890199 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.892739 kubelet[2689]: E0120 01:43:05.892601 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.892739 kubelet[2689]: W0120 01:43:05.892631 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.892739 kubelet[2689]: E0120 01:43:05.892649 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.895009 kubelet[2689]: E0120 01:43:05.894919 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.895009 kubelet[2689]: W0120 01:43:05.894942 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.895009 kubelet[2689]: E0120 01:43:05.894982 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:05.904508 containerd[1496]: time="2026-01-20T01:43:05.903568224Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 01:43:05.904508 containerd[1496]: time="2026-01-20T01:43:05.904403379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 01:43:05.904508 containerd[1496]: time="2026-01-20T01:43:05.904434707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:43:05.906862 containerd[1496]: time="2026-01-20T01:43:05.905662904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:43:05.969327 containerd[1496]: time="2026-01-20T01:43:05.969244430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kb97z,Uid:ba419904-99fd-481c-91ae-c92922814838,Namespace:calico-system,Attempt:0,}"
Jan 20 01:43:05.998473 kubelet[2689]: E0120 01:43:05.998366 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:05.998473 kubelet[2689]: W0120 01:43:05.998411 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:05.998473 kubelet[2689]: E0120 01:43:05.998440 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:06.000553 kubelet[2689]: E0120 01:43:05.999994 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:06.000553 kubelet[2689]: W0120 01:43:06.000043 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:06.000553 kubelet[2689]: E0120 01:43:06.000063 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:06.000161 systemd[1]: Started cri-containerd-e450533ea0935a69347dedbcf85e8b48b7ec4f08722336f2791016fb860613a6.scope - libcontainer container e450533ea0935a69347dedbcf85e8b48b7ec4f08722336f2791016fb860613a6.
Jan 20 01:43:06.003552 kubelet[2689]: E0120 01:43:06.003506 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:06.003552 kubelet[2689]: W0120 01:43:06.003552 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:06.003886 kubelet[2689]: E0120 01:43:06.003588 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:06.004186 kubelet[2689]: E0120 01:43:06.004146 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:06.004186 kubelet[2689]: W0120 01:43:06.004187 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:06.004303 kubelet[2689]: E0120 01:43:06.004211 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 20 01:43:06.005306 kubelet[2689]: E0120 01:43:06.005279 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.007312 kubelet[2689]: W0120 01:43:06.007005 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.007312 kubelet[2689]: E0120 01:43:06.007043 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.007723 kubelet[2689]: E0120 01:43:06.007571 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.007723 kubelet[2689]: W0120 01:43:06.007592 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.007723 kubelet[2689]: E0120 01:43:06.007608 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.013775 kubelet[2689]: E0120 01:43:06.012047 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.013775 kubelet[2689]: W0120 01:43:06.012069 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.013775 kubelet[2689]: E0120 01:43:06.013309 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.015412 kubelet[2689]: E0120 01:43:06.015382 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.015412 kubelet[2689]: W0120 01:43:06.015408 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.016246 kubelet[2689]: E0120 01:43:06.016210 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.018909 kubelet[2689]: E0120 01:43:06.018880 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.018909 kubelet[2689]: W0120 01:43:06.018909 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.020862 kubelet[2689]: E0120 01:43:06.019743 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:06.021298 kubelet[2689]: E0120 01:43:06.021273 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.021598 kubelet[2689]: W0120 01:43:06.021295 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.022717 kubelet[2689]: E0120 01:43:06.022631 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.022895 kubelet[2689]: E0120 01:43:06.022800 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.022895 kubelet[2689]: W0120 01:43:06.022853 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.023032 kubelet[2689]: E0120 01:43:06.022921 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.023305 kubelet[2689]: E0120 01:43:06.023259 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.023401 kubelet[2689]: W0120 01:43:06.023297 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.023596 kubelet[2689]: E0120 01:43:06.023569 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.024297 kubelet[2689]: E0120 01:43:06.024263 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.024297 kubelet[2689]: W0120 01:43:06.024284 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.024654 kubelet[2689]: E0120 01:43:06.024623 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.025444 kubelet[2689]: E0120 01:43:06.025415 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.025444 kubelet[2689]: W0120 01:43:06.025437 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.025566 kubelet[2689]: E0120 01:43:06.025497 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:06.026092 kubelet[2689]: E0120 01:43:06.026045 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.026092 kubelet[2689]: W0120 01:43:06.026084 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.026221 kubelet[2689]: E0120 01:43:06.026153 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.026597 kubelet[2689]: E0120 01:43:06.026555 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.026597 kubelet[2689]: W0120 01:43:06.026594 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.026720 kubelet[2689]: E0120 01:43:06.026669 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.027894 kubelet[2689]: E0120 01:43:06.027336 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.027894 kubelet[2689]: W0120 01:43:06.027367 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.027894 kubelet[2689]: E0120 01:43:06.027500 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.027894 kubelet[2689]: E0120 01:43:06.027755 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.027894 kubelet[2689]: W0120 01:43:06.027781 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.027894 kubelet[2689]: E0120 01:43:06.027892 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.028294 kubelet[2689]: E0120 01:43:06.028270 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.028294 kubelet[2689]: W0120 01:43:06.028292 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.028689 kubelet[2689]: E0120 01:43:06.028421 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:06.028896 kubelet[2689]: E0120 01:43:06.028866 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.028896 kubelet[2689]: W0120 01:43:06.028888 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.029009 kubelet[2689]: E0120 01:43:06.028911 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.029321 kubelet[2689]: E0120 01:43:06.029296 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.029321 kubelet[2689]: W0120 01:43:06.029317 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.029474 kubelet[2689]: E0120 01:43:06.029451 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.029927 kubelet[2689]: E0120 01:43:06.029906 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.029999 kubelet[2689]: W0120 01:43:06.029952 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.030855 kubelet[2689]: E0120 01:43:06.030319 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.030855 kubelet[2689]: E0120 01:43:06.030669 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.030855 kubelet[2689]: W0120 01:43:06.030684 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.030855 kubelet[2689]: E0120 01:43:06.030734 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:06.031547 kubelet[2689]: E0120 01:43:06.031520 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:06.031547 kubelet[2689]: W0120 01:43:06.031542 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:06.031668 kubelet[2689]: E0120 01:43:06.031560 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 20 01:43:06.033353 kubelet[2689]: E0120 01:43:06.033326 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:06.033462 kubelet[2689]: W0120 01:43:06.033381 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:06.033462 kubelet[2689]: E0120 01:43:06.033401 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:06.059910 containerd[1496]: time="2026-01-20T01:43:06.059062459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 20 01:43:06.059910 containerd[1496]: time="2026-01-20T01:43:06.059159346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 20 01:43:06.059910 containerd[1496]: time="2026-01-20T01:43:06.059177564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:43:06.059910 containerd[1496]: time="2026-01-20T01:43:06.059370669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 20 01:43:06.063955 kubelet[2689]: E0120 01:43:06.063805 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:06.063955 kubelet[2689]: W0120 01:43:06.063862 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:06.063955 kubelet[2689]: E0120 01:43:06.063896 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:06.094154 systemd[1]: Started cri-containerd-677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a.scope - libcontainer container 677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a.
Jan 20 01:43:06.150206 containerd[1496]: time="2026-01-20T01:43:06.150144098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5599d94444-69zzz,Uid:e5ec8271-b411-454f-81ca-214c110339e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e450533ea0935a69347dedbcf85e8b48b7ec4f08722336f2791016fb860613a6\""
Jan 20 01:43:06.152762 containerd[1496]: time="2026-01-20T01:43:06.152563984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kb97z,Uid:ba419904-99fd-481c-91ae-c92922814838,Namespace:calico-system,Attempt:0,} returns sandbox id \"677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a\""
Jan 20 01:43:06.167276 containerd[1496]: time="2026-01-20T01:43:06.167130083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 20 01:43:06.188341 systemd[1]: Started sshd@11-10.230.30.54:22-164.92.217.44:60336.service - OpenSSH per-connection server daemon (164.92.217.44:60336).
Jan 20 01:43:06.473075 sshd[3259]: Invalid user oracle from 164.92.217.44 port 60336
Jan 20 01:43:06.515462 sshd[3259]: Connection closed by invalid user oracle 164.92.217.44 port 60336 [preauth]
Jan 20 01:43:06.520120 systemd[1]: sshd@11-10.230.30.54:22-164.92.217.44:60336.service: Deactivated successfully.
Jan 20 01:43:07.673119 kubelet[2689]: E0120 01:43:07.673009 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd"
Jan 20 01:43:07.764630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927488095.mount: Deactivated successfully.
Jan 20 01:43:09.409525 containerd[1496]: time="2026-01-20T01:43:09.409381351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 20 01:43:09.425861 containerd[1496]: time="2026-01-20T01:43:09.425658218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.258424143s"
Jan 20 01:43:09.425861 containerd[1496]: time="2026-01-20T01:43:09.425736874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 20 01:43:09.435198 containerd[1496]: time="2026-01-20T01:43:09.435122455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 20 01:43:09.439193 containerd[1496]: time="2026-01-20T01:43:09.439154767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:43:09.443716 containerd[1496]: time="2026-01-20T01:43:09.442711106Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:43:09.444488 containerd[1496]: time="2026-01-20T01:43:09.444357507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 20 01:43:09.486384 containerd[1496]: time="2026-01-20T01:43:09.485714214Z" level=info msg="CreateContainer within sandbox \"e450533ea0935a69347dedbcf85e8b48b7ec4f08722336f2791016fb860613a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 20 01:43:09.507296 containerd[1496]: time="2026-01-20T01:43:09.507037160Z" level=info msg="CreateContainer within sandbox \"e450533ea0935a69347dedbcf85e8b48b7ec4f08722336f2791016fb860613a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6db37e1b27efb3489c4177fd6972e66b82eca393214d58bcd4ce15019bf5abfd\""
Jan 20 01:43:09.508895 containerd[1496]: time="2026-01-20T01:43:09.508743377Z" level=info msg="StartContainer for \"6db37e1b27efb3489c4177fd6972e66b82eca393214d58bcd4ce15019bf5abfd\""
Jan 20 01:43:09.594110 systemd[1]: Started cri-containerd-6db37e1b27efb3489c4177fd6972e66b82eca393214d58bcd4ce15019bf5abfd.scope - libcontainer container 6db37e1b27efb3489c4177fd6972e66b82eca393214d58bcd4ce15019bf5abfd.
Jan 20 01:43:09.665964 containerd[1496]: time="2026-01-20T01:43:09.665687927Z" level=info msg="StartContainer for \"6db37e1b27efb3489c4177fd6972e66b82eca393214d58bcd4ce15019bf5abfd\" returns successfully"
Jan 20 01:43:09.673721 kubelet[2689]: E0120 01:43:09.673555 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd"
Jan 20 01:43:09.898464 kubelet[2689]: E0120 01:43:09.898371 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.898464 kubelet[2689]: W0120 01:43:09.898452 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.898889 kubelet[2689]: E0120 01:43:09.898572 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:09.900190 kubelet[2689]: E0120 01:43:09.900164 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.900275 kubelet[2689]: W0120 01:43:09.900205 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.900275 kubelet[2689]: E0120 01:43:09.900224 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:09.900578 kubelet[2689]: E0120 01:43:09.900557 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.900665 kubelet[2689]: W0120 01:43:09.900596 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.900665 kubelet[2689]: E0120 01:43:09.900615 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:09.901147 kubelet[2689]: E0120 01:43:09.901125 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.901225 kubelet[2689]: W0120 01:43:09.901163 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.901225 kubelet[2689]: E0120 01:43:09.901185 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 20 01:43:09.901636 kubelet[2689]: E0120 01:43:09.901614 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.903939 kubelet[2689]: W0120 01:43:09.903894 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.903939 kubelet[2689]: E0120 01:43:09.903929 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.904354 kubelet[2689]: E0120 01:43:09.904321 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.904354 kubelet[2689]: W0120 01:43:09.904346 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.904489 kubelet[2689]: E0120 01:43:09.904366 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.904745 kubelet[2689]: E0120 01:43:09.904698 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.904745 kubelet[2689]: W0120 01:43:09.904745 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.904928 kubelet[2689]: E0120 01:43:09.904762 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.905134 kubelet[2689]: E0120 01:43:09.905115 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.905205 kubelet[2689]: W0120 01:43:09.905153 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.905205 kubelet[2689]: E0120 01:43:09.905170 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.905557 kubelet[2689]: E0120 01:43:09.905536 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.905557 kubelet[2689]: W0120 01:43:09.905556 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.905697 kubelet[2689]: E0120 01:43:09.905572 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:09.908235 kubelet[2689]: E0120 01:43:09.908193 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.908235 kubelet[2689]: W0120 01:43:09.908223 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.909010 kubelet[2689]: E0120 01:43:09.908243 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.909010 kubelet[2689]: E0120 01:43:09.908534 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.909010 kubelet[2689]: W0120 01:43:09.908548 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.909010 kubelet[2689]: E0120 01:43:09.908564 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.909010 kubelet[2689]: E0120 01:43:09.908852 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.909010 kubelet[2689]: W0120 01:43:09.908866 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.909010 kubelet[2689]: E0120 01:43:09.908889 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.909671 kubelet[2689]: E0120 01:43:09.909179 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.909671 kubelet[2689]: W0120 01:43:09.909194 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.909671 kubelet[2689]: E0120 01:43:09.909208 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.909671 kubelet[2689]: E0120 01:43:09.909485 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.909671 kubelet[2689]: W0120 01:43:09.909499 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.909671 kubelet[2689]: E0120 01:43:09.909514 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 20 01:43:09.910187 kubelet[2689]: E0120 01:43:09.909755 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.910187 kubelet[2689]: W0120 01:43:09.909769 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.910187 kubelet[2689]: E0120 01:43:09.909792 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:09.979944 kubelet[2689]: I0120 01:43:09.979667 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5599d94444-69zzz" podStartSLOduration=1.701860639 podStartE2EDuration="4.979597488s" podCreationTimestamp="2026-01-20 01:43:05 +0000 UTC" firstStartedPulling="2026-01-20 01:43:06.15300385 +0000 UTC m=+24.679431235" lastFinishedPulling="2026-01-20 01:43:09.430740699 +0000 UTC m=+27.957168084" observedRunningTime="2026-01-20 01:43:09.977173795 +0000 UTC m=+28.503601193" watchObservedRunningTime="2026-01-20 01:43:09.979597488 +0000 UTC m=+28.506024880"
Jan 20 01:43:09.983101 kubelet[2689]: E0120 01:43:09.982607 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.983101 kubelet[2689]: W0120 01:43:09.982632 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.983101 kubelet[2689]: E0120 01:43:09.982668 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:09.984854 kubelet[2689]: E0120 01:43:09.983518 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.984992 kubelet[2689]: W0120 01:43:09.984968 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.985135 kubelet[2689]: E0120 01:43:09.985101 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 20 01:43:09.985846 kubelet[2689]: E0120 01:43:09.985794 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 20 01:43:09.985930 kubelet[2689]: W0120 01:43:09.985870 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 20 01:43:09.985930 kubelet[2689]: E0120 01:43:09.985904 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 20 01:43:09.986329 kubelet[2689]: E0120 01:43:09.986276 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.986414 kubelet[2689]: W0120 01:43:09.986297 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.986414 kubelet[2689]: E0120 01:43:09.986360 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.986703 kubelet[2689]: E0120 01:43:09.986674 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.986703 kubelet[2689]: W0120 01:43:09.986695 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.986939 kubelet[2689]: E0120 01:43:09.986867 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.987030 kubelet[2689]: E0120 01:43:09.986988 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.987030 kubelet[2689]: W0120 01:43:09.987002 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.987165 kubelet[2689]: E0120 01:43:09.987137 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.987422 kubelet[2689]: E0120 01:43:09.987396 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.987422 kubelet[2689]: W0120 01:43:09.987417 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.987620 kubelet[2689]: E0120 01:43:09.987453 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 01:43:09.987870 kubelet[2689]: E0120 01:43:09.987751 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.987870 kubelet[2689]: W0120 01:43:09.987773 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.987870 kubelet[2689]: E0120 01:43:09.987797 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:09.988759 kubelet[2689]: E0120 01:43:09.988735 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:09.988759 kubelet[2689]: W0120 01:43:09.988756 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:09.988957 kubelet[2689]: E0120 01:43:09.988854 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 20 01:43:10.868238 kubelet[2689]: I0120 01:43:10.868015 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:43:10.917762 kubelet[2689]: E0120 01:43:10.917702 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:10.917762 kubelet[2689]: W0120 01:43:10.917744 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:10.917762 kubelet[2689]: E0120 01:43:10.917775 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 20 01:43:11.006663 kubelet[2689]: E0120 01:43:11.006638 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 01:43:11.006663 kubelet[2689]: W0120 01:43:11.006660 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 01:43:11.006807 kubelet[2689]: E0120 01:43:11.006677 2689 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 01:43:11.012697 containerd[1496]: time="2026-01-20T01:43:11.012600848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:11.014811 containerd[1496]: time="2026-01-20T01:43:11.014741944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 20 01:43:11.015995 containerd[1496]: time="2026-01-20T01:43:11.015933731Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:11.019393 containerd[1496]: time="2026-01-20T01:43:11.019253343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:11.021071 containerd[1496]: time="2026-01-20T01:43:11.021016484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.585827884s" Jan 20 01:43:11.021163 containerd[1496]: time="2026-01-20T01:43:11.021072603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 20 01:43:11.024699 containerd[1496]: time="2026-01-20T01:43:11.024526569Z" level=info msg="CreateContainer within sandbox \"677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 01:43:11.044775 containerd[1496]: time="2026-01-20T01:43:11.044595316Z" level=info msg="CreateContainer within sandbox \"677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470\"" Jan 20 01:43:11.048881 containerd[1496]: time="2026-01-20T01:43:11.047981407Z" level=info msg="StartContainer for \"41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470\"" Jan 20 01:43:11.125093 systemd[1]: Started cri-containerd-41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470.scope - libcontainer container 41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470. Jan 20 01:43:11.180187 containerd[1496]: time="2026-01-20T01:43:11.180107827Z" level=info msg="StartContainer for \"41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470\" returns successfully" Jan 20 01:43:11.203516 systemd[1]: cri-containerd-41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470.scope: Deactivated successfully. Jan 20 01:43:11.243674 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470-rootfs.mount: Deactivated successfully. 
Jan 20 01:43:11.270616 containerd[1496]: time="2026-01-20T01:43:11.256981749Z" level=info msg="shim disconnected" id=41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470 namespace=k8s.io Jan 20 01:43:11.270616 containerd[1496]: time="2026-01-20T01:43:11.270361318Z" level=warning msg="cleaning up after shim disconnected" id=41f5314f75252b09c23016e6d6c15e0514b95c031a8db5bda75c16bda6c83470 namespace=k8s.io Jan 20 01:43:11.270616 containerd[1496]: time="2026-01-20T01:43:11.270387639Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:43:11.673060 kubelet[2689]: E0120 01:43:11.672973 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:11.877328 containerd[1496]: time="2026-01-20T01:43:11.877090770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 01:43:13.674108 kubelet[2689]: E0120 01:43:13.674014 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:15.673175 kubelet[2689]: E0120 01:43:15.672626 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:17.013299 containerd[1496]: time="2026-01-20T01:43:17.011676441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:17.013299 containerd[1496]: time="2026-01-20T01:43:17.012996302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 20 01:43:17.013299 containerd[1496]: time="2026-01-20T01:43:17.013220020Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:17.017874 containerd[1496]: time="2026-01-20T01:43:17.016227893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:17.018064 containerd[1496]: time="2026-01-20T01:43:17.018028220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.140836919s" Jan 20 01:43:17.018218 containerd[1496]: time="2026-01-20T01:43:17.018189206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 20 01:43:17.023864 containerd[1496]: time="2026-01-20T01:43:17.023506546Z" 
level=info msg="CreateContainer within sandbox \"677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 01:43:17.076921 containerd[1496]: time="2026-01-20T01:43:17.076782510Z" level=info msg="CreateContainer within sandbox \"677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908\"" Jan 20 01:43:17.079896 containerd[1496]: time="2026-01-20T01:43:17.078065179Z" level=info msg="StartContainer for \"987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908\"" Jan 20 01:43:17.155546 systemd[1]: Started cri-containerd-987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908.scope - libcontainer container 987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908. Jan 20 01:43:17.215162 containerd[1496]: time="2026-01-20T01:43:17.215082821Z" level=info msg="StartContainer for \"987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908\" returns successfully" Jan 20 01:43:17.673910 kubelet[2689]: E0120 01:43:17.672150 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:18.178954 systemd[1]: cri-containerd-987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908.scope: Deactivated successfully. Jan 20 01:43:18.382793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908-rootfs.mount: Deactivated successfully. Jan 20 01:43:18.429404 containerd[1496]: time="2026-01-20T01:43:18.429025506Z" level=info msg="shim disconnected" id=987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908 namespace=k8s.io Jan 20 01:43:18.429404 containerd[1496]: time="2026-01-20T01:43:18.429203855Z" level=warning msg="cleaning up after shim disconnected" id=987fe742b5635789f01114312f9deb74faa7c3031125b9a90c778fcf188d7908 namespace=k8s.io Jan 20 01:43:18.429404 containerd[1496]: time="2026-01-20T01:43:18.429228467Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 20 01:43:18.443344 kubelet[2689]: I0120 01:43:18.443292 2689 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 01:43:18.469017 containerd[1496]: time="2026-01-20T01:43:18.467692428Z" level=warning msg="cleanup warnings time=\"2026-01-20T01:43:18Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 20 01:43:18.589557 systemd[1]: Created slice kubepods-besteffort-poddad1d3fa_2f8c_4259_b917_059c3b3e6572.slice - libcontainer container kubepods-besteffort-poddad1d3fa_2f8c_4259_b917_059c3b3e6572.slice. Jan 20 01:43:18.611691 systemd[1]: Created slice kubepods-burstable-pod896c437d_0a8d_496f_a420_742c93e0d6a2.slice - libcontainer container kubepods-burstable-pod896c437d_0a8d_496f_a420_742c93e0d6a2.slice. Jan 20 01:43:18.629886 systemd[1]: Created slice kubepods-besteffort-pod63686bdb_630e_4c31_bb10_61a7b178bd09.slice - libcontainer container kubepods-besteffort-pod63686bdb_630e_4c31_bb10_61a7b178bd09.slice. 
Jan 20 01:43:18.651122 systemd[1]: Created slice kubepods-burstable-pod94aa1e8b_d364_40d2_9c05_39e890317a94.slice - libcontainer container kubepods-burstable-pod94aa1e8b_d364_40d2_9c05_39e890317a94.slice. Jan 20 01:43:18.666646 systemd[1]: Created slice kubepods-besteffort-podeedef20c_6169_4097_90af_4b5ed35e4c70.slice - libcontainer container kubepods-besteffort-podeedef20c_6169_4097_90af_4b5ed35e4c70.slice. Jan 20 01:43:18.672085 kubelet[2689]: I0120 01:43:18.671949 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/573ad695-5762-4b18-9450-3954cd6448a6-calico-apiserver-certs\") pod \"calico-apiserver-799b8f498b-fhvkc\" (UID: \"573ad695-5762-4b18-9450-3954cd6448a6\") " pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" Jan 20 01:43:18.672085 kubelet[2689]: I0120 01:43:18.672008 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jd9x\" (UniqueName: \"kubernetes.io/projected/63686bdb-630e-4c31-bb10-61a7b178bd09-kube-api-access-5jd9x\") pod \"calico-apiserver-799b8f498b-5jdcb\" (UID: \"63686bdb-630e-4c31-bb10-61a7b178bd09\") " pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" Jan 20 01:43:18.672085 kubelet[2689]: I0120 01:43:18.672042 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eedef20c-6169-4097-90af-4b5ed35e4c70-tigera-ca-bundle\") pod \"calico-kube-controllers-849c94fcc7-89lqr\" (UID: \"eedef20c-6169-4097-90af-4b5ed35e4c70\") " pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" Jan 20 01:43:18.672085 kubelet[2689]: I0120 01:43:18.672073 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zx2f\" (UniqueName: \"kubernetes.io/projected/7f445973-85d0-4221-8af9-3dc0c3aa4878-kube-api-access-2zx2f\") pod \"goldmane-666569f655-kt727\" (UID: \"7f445973-85d0-4221-8af9-3dc0c3aa4878\") " pod="calico-system/goldmane-666569f655-kt727" Jan 20 01:43:18.672493 kubelet[2689]: I0120 01:43:18.672107 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttbj7\" (UniqueName: \"kubernetes.io/projected/eedef20c-6169-4097-90af-4b5ed35e4c70-kube-api-access-ttbj7\") pod \"calico-kube-controllers-849c94fcc7-89lqr\" (UID: \"eedef20c-6169-4097-90af-4b5ed35e4c70\") " pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" Jan 20 01:43:18.672493 kubelet[2689]: I0120 01:43:18.672163 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f445973-85d0-4221-8af9-3dc0c3aa4878-config\") pod \"goldmane-666569f655-kt727\" (UID: \"7f445973-85d0-4221-8af9-3dc0c3aa4878\") " pod="calico-system/goldmane-666569f655-kt727" Jan 20 01:43:18.672493 kubelet[2689]: I0120 01:43:18.672197 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h49ps\" (UniqueName: \"kubernetes.io/projected/5bb26b29-89e1-4055-a3dd-e9f6156c0d75-kube-api-access-h49ps\") pod \"calico-apiserver-66bfff8c98-mt7kn\" (UID: \"5bb26b29-89e1-4055-a3dd-e9f6156c0d75\") " pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" Jan 20 01:43:18.672493 kubelet[2689]: I0120 01:43:18.672226 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-nrgln\" (UniqueName: \"kubernetes.io/projected/896c437d-0a8d-496f-a420-742c93e0d6a2-kube-api-access-nrgln\") pod \"coredns-668d6bf9bc-jd9dv\" (UID: \"896c437d-0a8d-496f-a420-742c93e0d6a2\") " pod="kube-system/coredns-668d6bf9bc-jd9dv" Jan 20 01:43:18.672493 kubelet[2689]: I0120 01:43:18.672255 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t4jg\" (UniqueName: \"kubernetes.io/projected/dad1d3fa-2f8c-4259-b917-059c3b3e6572-kube-api-access-8t4jg\") pod \"whisker-596775c78f-n9sm2\" (UID: \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\") " pod="calico-system/whisker-596775c78f-n9sm2" Jan 20 01:43:18.672779 kubelet[2689]: I0120 01:43:18.672286 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/63686bdb-630e-4c31-bb10-61a7b178bd09-calico-apiserver-certs\") pod \"calico-apiserver-799b8f498b-5jdcb\" (UID: \"63686bdb-630e-4c31-bb10-61a7b178bd09\") " pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" Jan 20 01:43:18.672779 kubelet[2689]: I0120 01:43:18.672317 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-backend-key-pair\") pod \"whisker-596775c78f-n9sm2\" (UID: \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\") " pod="calico-system/whisker-596775c78f-n9sm2" Jan 20 01:43:18.672779 kubelet[2689]: I0120 01:43:18.672347 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94aa1e8b-d364-40d2-9c05-39e890317a94-config-volume\") pod \"coredns-668d6bf9bc-gjtls\" (UID: \"94aa1e8b-d364-40d2-9c05-39e890317a94\") " pod="kube-system/coredns-668d6bf9bc-gjtls" Jan 20 01:43:18.672779 kubelet[2689]: I0120 01:43:18.672424 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5bb26b29-89e1-4055-a3dd-e9f6156c0d75-calico-apiserver-certs\") pod \"calico-apiserver-66bfff8c98-mt7kn\" (UID: \"5bb26b29-89e1-4055-a3dd-e9f6156c0d75\") " pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" Jan 20 01:43:18.672779 kubelet[2689]: I0120 01:43:18.672466 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/896c437d-0a8d-496f-a420-742c93e0d6a2-config-volume\") pod \"coredns-668d6bf9bc-jd9dv\" (UID: \"896c437d-0a8d-496f-a420-742c93e0d6a2\") " pod="kube-system/coredns-668d6bf9bc-jd9dv" Jan 20 01:43:18.673086 kubelet[2689]: I0120 01:43:18.672503 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-ca-bundle\") pod \"whisker-596775c78f-n9sm2\" (UID: \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\") " pod="calico-system/whisker-596775c78f-n9sm2" Jan 20 01:43:18.673086 kubelet[2689]: I0120 01:43:18.672548 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f445973-85d0-4221-8af9-3dc0c3aa4878-goldmane-ca-bundle\") pod \"goldmane-666569f655-kt727\" (UID: \"7f445973-85d0-4221-8af9-3dc0c3aa4878\") " 
pod="calico-system/goldmane-666569f655-kt727" Jan 20 01:43:18.673086 kubelet[2689]: I0120 01:43:18.672576 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm48s\" (UniqueName: \"kubernetes.io/projected/94aa1e8b-d364-40d2-9c05-39e890317a94-kube-api-access-tm48s\") pod \"coredns-668d6bf9bc-gjtls\" (UID: \"94aa1e8b-d364-40d2-9c05-39e890317a94\") " pod="kube-system/coredns-668d6bf9bc-gjtls" Jan 20 01:43:18.673086 kubelet[2689]: I0120 01:43:18.672629 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7f445973-85d0-4221-8af9-3dc0c3aa4878-goldmane-key-pair\") pod \"goldmane-666569f655-kt727\" (UID: \"7f445973-85d0-4221-8af9-3dc0c3aa4878\") " pod="calico-system/goldmane-666569f655-kt727" Jan 20 01:43:18.673086 kubelet[2689]: I0120 01:43:18.672683 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44szk\" (UniqueName: \"kubernetes.io/projected/573ad695-5762-4b18-9450-3954cd6448a6-kube-api-access-44szk\") pod \"calico-apiserver-799b8f498b-fhvkc\" (UID: \"573ad695-5762-4b18-9450-3954cd6448a6\") " pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" Jan 20 01:43:18.679331 systemd[1]: Created slice kubepods-besteffort-pod5bb26b29_89e1_4055_a3dd_e9f6156c0d75.slice - libcontainer container kubepods-besteffort-pod5bb26b29_89e1_4055_a3dd_e9f6156c0d75.slice. Jan 20 01:43:18.695609 systemd[1]: Created slice kubepods-besteffort-pod573ad695_5762_4b18_9450_3954cd6448a6.slice - libcontainer container kubepods-besteffort-pod573ad695_5762_4b18_9450_3954cd6448a6.slice. Jan 20 01:43:18.709875 systemd[1]: Created slice kubepods-besteffort-pod7f445973_85d0_4221_8af9_3dc0c3aa4878.slice - libcontainer container kubepods-besteffort-pod7f445973_85d0_4221_8af9_3dc0c3aa4878.slice. 
Jan 20 01:43:18.908777 containerd[1496]: time="2026-01-20T01:43:18.908484681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596775c78f-n9sm2,Uid:dad1d3fa-2f8c-4259-b917-059c3b3e6572,Namespace:calico-system,Attempt:0,}" Jan 20 01:43:18.912433 containerd[1496]: time="2026-01-20T01:43:18.909575828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 01:43:18.924963 containerd[1496]: time="2026-01-20T01:43:18.924925020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jd9dv,Uid:896c437d-0a8d-496f-a420-742c93e0d6a2,Namespace:kube-system,Attempt:0,}" Jan 20 01:43:18.948628 containerd[1496]: time="2026-01-20T01:43:18.948344067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-5jdcb,Uid:63686bdb-630e-4c31-bb10-61a7b178bd09,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:43:18.960647 containerd[1496]: time="2026-01-20T01:43:18.960360766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gjtls,Uid:94aa1e8b-d364-40d2-9c05-39e890317a94,Namespace:kube-system,Attempt:0,}" Jan 20 01:43:18.974036 containerd[1496]: time="2026-01-20T01:43:18.973973533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849c94fcc7-89lqr,Uid:eedef20c-6169-4097-90af-4b5ed35e4c70,Namespace:calico-system,Attempt:0,}" Jan 20 01:43:18.993474 containerd[1496]: time="2026-01-20T01:43:18.993384358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bfff8c98-mt7kn,Uid:5bb26b29-89e1-4055-a3dd-e9f6156c0d75,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:43:19.005161 containerd[1496]: time="2026-01-20T01:43:19.004846830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-fhvkc,Uid:573ad695-5762-4b18-9450-3954cd6448a6,Namespace:calico-apiserver,Attempt:0,}" Jan 20 01:43:19.025498 containerd[1496]: time="2026-01-20T01:43:19.025439408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kt727,Uid:7f445973-85d0-4221-8af9-3dc0c3aa4878,Namespace:calico-system,Attempt:0,}" Jan 20 01:43:19.503089 containerd[1496]: time="2026-01-20T01:43:19.503022119Z" level=error msg="Failed to destroy network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.508535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e-shm.mount: Deactivated successfully. Jan 20 01:43:19.518609 containerd[1496]: time="2026-01-20T01:43:19.518553402Z" level=error msg="Failed to destroy network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.524654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42-shm.mount: Deactivated successfully. 
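Every sandbox failure from here on has the same root cause, stated in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and node:v3.30.4 is still being pulled at this point. A sketch of that readiness guard, simplified from the logged behaviour (the real plugin's error handling is richer):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// readNodename mimics the guard behind the repeated sandbox errors:
// until calico/node has started and written this file, pod networking
// cannot be set up or torn down.
func readNodename() (string, error) {
	b, err := os.ReadFile("/var/lib/calico/nodename")
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf("check that the calico/node container is running and has mounted /var/lib/calico/: %w", err)
	}
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Println(`plugin type="calico" failed (add):`, err)
		return
	}
	fmt.Println("node:", name)
}
```

Until the file appears, both the CNI add during RunPodSandbox and the delete during cleanup fail identically, which is why each pod below produces a matched pair of "(add)" and "(delete)" errors.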
Jan 20 01:43:19.525355 containerd[1496]: time="2026-01-20T01:43:19.525314420Z" level=error msg="encountered an error cleaning up failed sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.525596 containerd[1496]: time="2026-01-20T01:43:19.525554719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-596775c78f-n9sm2,Uid:dad1d3fa-2f8c-4259-b917-059c3b3e6572,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.531021 containerd[1496]: time="2026-01-20T01:43:19.530976772Z" level=error msg="encountered an error cleaning up failed sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.531234 containerd[1496]: time="2026-01-20T01:43:19.531195753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gjtls,Uid:94aa1e8b-d364-40d2-9c05-39e890317a94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.559996 kubelet[2689]: E0120 01:43:19.545305 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.561101 kubelet[2689]: E0120 01:43:19.545405 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.578967 kubelet[2689]: E0120 01:43:19.578898 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gjtls" Jan 20 01:43:19.579183 kubelet[2689]: E0120 01:43:19.579002 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gjtls" Jan 20 01:43:19.579321 kubelet[2689]: E0120 01:43:19.579259 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gjtls_kube-system(94aa1e8b-d364-40d2-9c05-39e890317a94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gjtls_kube-system(94aa1e8b-d364-40d2-9c05-39e890317a94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gjtls" podUID="94aa1e8b-d364-40d2-9c05-39e890317a94" Jan 20 01:43:19.583543 kubelet[2689]: E0120 01:43:19.578682 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596775c78f-n9sm2" Jan 20 01:43:19.583660 kubelet[2689]: E0120 01:43:19.583579 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-596775c78f-n9sm2" Jan 20 01:43:19.583660 kubelet[2689]: E0120 01:43:19.583630 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-596775c78f-n9sm2_calico-system(dad1d3fa-2f8c-4259-b917-059c3b3e6572)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-596775c78f-n9sm2_calico-system(dad1d3fa-2f8c-4259-b917-059c3b3e6572)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-596775c78f-n9sm2" podUID="dad1d3fa-2f8c-4259-b917-059c3b3e6572" Jan 20 01:43:19.604009 containerd[1496]: time="2026-01-20T01:43:19.603822938Z" level=error msg="Failed to destroy network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.608904 containerd[1496]: time="2026-01-20T01:43:19.604556005Z" level=error msg="encountered an error cleaning up failed sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.608904 containerd[1496]: time="2026-01-20T01:43:19.604638334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jd9dv,Uid:896c437d-0a8d-496f-a420-742c93e0d6a2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.609142 kubelet[2689]: E0120 01:43:19.605016 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.609142 kubelet[2689]: E0120 01:43:19.605133 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jd9dv" Jan 20 01:43:19.609142 kubelet[2689]: E0120 01:43:19.605173 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jd9dv" Jan 20 01:43:19.609328 kubelet[2689]: E0120 01:43:19.605243 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jd9dv_kube-system(896c437d-0a8d-496f-a420-742c93e0d6a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jd9dv_kube-system(896c437d-0a8d-496f-a420-742c93e0d6a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jd9dv" podUID="896c437d-0a8d-496f-a420-742c93e0d6a2" Jan 20 01:43:19.610291 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a-shm.mount: Deactivated successfully. 
Jan 20 01:43:19.622077 containerd[1496]: time="2026-01-20T01:43:19.622005859Z" level=error msg="Failed to destroy network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.624906 containerd[1496]: time="2026-01-20T01:43:19.623847784Z" level=error msg="encountered an error cleaning up failed sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.625245 containerd[1496]: time="2026-01-20T01:43:19.625076781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bfff8c98-mt7kn,Uid:5bb26b29-89e1-4055-a3dd-e9f6156c0d75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.629285 kubelet[2689]: E0120 01:43:19.625615 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.629285 kubelet[2689]: E0120 01:43:19.625736 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" Jan 20 01:43:19.629285 kubelet[2689]: E0120 01:43:19.625769 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" Jan 20 01:43:19.627568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671-shm.mount: Deactivated successfully. 
Jan 20 01:43:19.631057 kubelet[2689]: E0120 01:43:19.625878 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66bfff8c98-mt7kn_calico-apiserver(5bb26b29-89e1-4055-a3dd-e9f6156c0d75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66bfff8c98-mt7kn_calico-apiserver(5bb26b29-89e1-4055-a3dd-e9f6156c0d75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:43:19.652066 containerd[1496]: time="2026-01-20T01:43:19.651981194Z" level=error msg="Failed to destroy network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.653108 containerd[1496]: time="2026-01-20T01:43:19.653071166Z" level=error msg="encountered an error cleaning up failed sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.653314 containerd[1496]: time="2026-01-20T01:43:19.653276827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-fhvkc,Uid:573ad695-5762-4b18-9450-3954cd6448a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.654943 kubelet[2689]: E0120 01:43:19.654440 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.654943 kubelet[2689]: E0120 01:43:19.654583 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" Jan 20 01:43:19.654943 kubelet[2689]: E0120 01:43:19.654620 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" Jan 20 01:43:19.655196 kubelet[2689]: E0120 01:43:19.654721 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799b8f498b-fhvkc_calico-apiserver(573ad695-5762-4b18-9450-3954cd6448a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-799b8f498b-fhvkc_calico-apiserver(573ad695-5762-4b18-9450-3954cd6448a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:43:19.665152 containerd[1496]: time="2026-01-20T01:43:19.665038285Z" level=error msg="Failed to destroy network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.667442 containerd[1496]: time="2026-01-20T01:43:19.667382017Z" level=error msg="encountered an error cleaning up failed sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.667575 containerd[1496]: time="2026-01-20T01:43:19.667512258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-5jdcb,Uid:63686bdb-630e-4c31-bb10-61a7b178bd09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.668310 kubelet[2689]: E0120 01:43:19.668034 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.668429 kubelet[2689]: E0120 01:43:19.668362 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" Jan 20 01:43:19.669260 kubelet[2689]: E0120 01:43:19.669140 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" Jan 20 01:43:19.669614 kubelet[2689]: E0120 01:43:19.669344 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799b8f498b-5jdcb_calico-apiserver(63686bdb-630e-4c31-bb10-61a7b178bd09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-799b8f498b-5jdcb_calico-apiserver(63686bdb-630e-4c31-bb10-61a7b178bd09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:43:19.672784 containerd[1496]: time="2026-01-20T01:43:19.672359784Z" level=error msg="Failed to destroy network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.676725 containerd[1496]: time="2026-01-20T01:43:19.676072018Z" level=error msg="encountered an error cleaning up failed sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.677006 containerd[1496]: time="2026-01-20T01:43:19.676658578Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849c94fcc7-89lqr,Uid:eedef20c-6169-4097-90af-4b5ed35e4c70,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.679062 containerd[1496]: time="2026-01-20T01:43:19.679023525Z" level=error msg="Failed to destroy network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.680013 containerd[1496]: time="2026-01-20T01:43:19.679797310Z" level=error msg="encountered an error cleaning up failed sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.680013 containerd[1496]: time="2026-01-20T01:43:19.679896796Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-kt727,Uid:7f445973-85d0-4221-8af9-3dc0c3aa4878,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.681010 kubelet[2689]: E0120 01:43:19.680951 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.681499 kubelet[2689]: E0120 01:43:19.681075 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kt727" Jan 20 01:43:19.681499 kubelet[2689]: E0120 01:43:19.681155 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kt727" Jan 20 01:43:19.681499 kubelet[2689]: E0120 01:43:19.681244 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kt727_calico-system(7f445973-85d0-4221-8af9-3dc0c3aa4878)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kt727_calico-system(7f445973-85d0-4221-8af9-3dc0c3aa4878)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:43:19.681946 kubelet[2689]: E0120 01:43:19.681901 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.682065 kubelet[2689]: E0120 01:43:19.681999 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" Jan 20 01:43:19.682150 kubelet[2689]: E0120 01:43:19.682073 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" Jan 20 01:43:19.684184 kubelet[2689]: E0120 01:43:19.682176 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-849c94fcc7-89lqr_calico-system(eedef20c-6169-4097-90af-4b5ed35e4c70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-849c94fcc7-89lqr_calico-system(eedef20c-6169-4097-90af-4b5ed35e4c70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:43:19.695537 systemd[1]: Created slice kubepods-besteffort-podc6594f9f_80a7_4dbf_a4b4_1d2817fc3bbd.slice - libcontainer container kubepods-besteffort-podc6594f9f_80a7_4dbf_a4b4_1d2817fc3bbd.slice. Jan 20 01:43:19.700713 containerd[1496]: time="2026-01-20T01:43:19.699994282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w59jj,Uid:c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd,Namespace:calico-system,Attempt:0,}" Jan 20 01:43:19.799510 containerd[1496]: time="2026-01-20T01:43:19.799344685Z" level=error msg="Failed to destroy network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.800567 containerd[1496]: time="2026-01-20T01:43:19.800345064Z" level=error msg="encountered an error cleaning up failed sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.800567 containerd[1496]: time="2026-01-20T01:43:19.800449862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w59jj,Uid:c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.801220 kubelet[2689]: E0120 01:43:19.801142 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:19.801305 kubelet[2689]: E0120 01:43:19.801248 2689 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w59jj" Jan 20 01:43:19.801305 kubelet[2689]: E0120 01:43:19.801294 2689 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w59jj" Jan 20 01:43:19.801825 kubelet[2689]: E0120 01:43:19.801383 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:19.910925 kubelet[2689]: I0120 01:43:19.910272 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:19.913269 kubelet[2689]: I0120 01:43:19.912768 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:43:19.932008 kubelet[2689]: I0120 01:43:19.931966 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:43:19.935878 kubelet[2689]: I0120 01:43:19.935165 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:19.939082 kubelet[2689]: I0120 01:43:19.939041 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:19.942183 kubelet[2689]: I0120 01:43:19.941649 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:19.944300 kubelet[2689]: I0120 01:43:19.943955 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:19.950761 kubelet[2689]: I0120 01:43:19.950144 2689 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:19.954651 kubelet[2689]: I0120 01:43:19.954585 2689 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:43:19.974727 containerd[1496]: time="2026-01-20T01:43:19.974599540Z" level=info msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\"" Jan 20 01:43:19.975219 containerd[1496]: time="2026-01-20T01:43:19.975182740Z" level=info msg="StopPodSandbox for \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\"" Jan 20 01:43:19.976765 containerd[1496]: time="2026-01-20T01:43:19.976632583Z" level=info msg="Ensure that sandbox 9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70 in task-service has been cleanup successfully" Jan 20 01:43:19.976765 containerd[1496]: time="2026-01-20T01:43:19.976699443Z" level=info msg="Ensure that sandbox 846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e in task-service has been cleanup successfully" Jan 20 01:43:19.979865 containerd[1496]: time="2026-01-20T01:43:19.979689463Z" level=info msg="StopPodSandbox for \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\"" Jan 20 01:43:19.980314 containerd[1496]: time="2026-01-20T01:43:19.980258973Z" level=info msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\"" Jan 20 01:43:19.981708 containerd[1496]: time="2026-01-20T01:43:19.981339690Z" level=info msg="Ensure that sandbox 50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb in task-service has been cleanup successfully" Jan 20 01:43:19.986449 containerd[1496]: time="2026-01-20T01:43:19.986410104Z" level=info msg="Ensure that sandbox 0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06 in task-service has been cleanup successfully" Jan 20 01:43:19.986625 containerd[1496]: time="2026-01-20T01:43:19.986594916Z" level=info msg="StopPodSandbox for \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\"" Jan 20 01:43:19.986934 containerd[1496]: time="2026-01-20T01:43:19.986903415Z" level=info msg="Ensure that sandbox 38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27 in task-service has been cleanup successfully" Jan 20 01:43:19.987426 containerd[1496]: time="2026-01-20T01:43:19.987397689Z" level=info msg="StopPodSandbox for \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\"" Jan 20 01:43:19.987728 containerd[1496]: time="2026-01-20T01:43:19.987697680Z" level=info msg="Ensure that sandbox a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5 in task-service has been cleanup successfully" Jan 20 01:43:19.988437 containerd[1496]: time="2026-01-20T01:43:19.987078421Z" level=info msg="StopPodSandbox for \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\"" Jan 20 01:43:19.988437 containerd[1496]: time="2026-01-20T01:43:19.988385466Z" level=info msg="Ensure that sandbox e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a in task-service has been cleanup successfully" Jan 20 01:43:19.989794 containerd[1496]: time="2026-01-20T01:43:19.987144449Z" level=info msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\"" Jan 20 01:43:19.992716 containerd[1496]: time="2026-01-20T01:43:19.991210805Z" level=info msg="Ensure that sandbox a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42 
in task-service has been cleanup successfully" Jan 20 01:43:20.002888 containerd[1496]: time="2026-01-20T01:43:19.987915681Z" level=info msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\"" Jan 20 01:43:20.003362 containerd[1496]: time="2026-01-20T01:43:20.003311491Z" level=info msg="Ensure that sandbox 8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671 in task-service has been cleanup successfully" Jan 20 01:43:20.158646 containerd[1496]: time="2026-01-20T01:43:20.158449834Z" level=error msg="StopPodSandbox for \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\" failed" error="failed to destroy network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.160518 kubelet[2689]: E0120 01:43:20.159855 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:20.190212 kubelet[2689]: E0120 01:43:20.173895 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5"} Jan 20 01:43:20.190212 kubelet[2689]: E0120 01:43:20.189985 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"573ad695-5762-4b18-9450-3954cd6448a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.190212 kubelet[2689]: E0120 01:43:20.190051 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"573ad695-5762-4b18-9450-3954cd6448a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:43:20.197398 containerd[1496]: time="2026-01-20T01:43:20.197273548Z" level=error msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" failed" error="failed to destroy network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.198402 kubelet[2689]: E0120 01:43:20.198067 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:43:20.198402 kubelet[2689]: E0120 01:43:20.198164 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e"} Jan 20 01:43:20.198402 kubelet[2689]: E0120 01:43:20.198224 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94aa1e8b-d364-40d2-9c05-39e890317a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.198402 kubelet[2689]: E0120 01:43:20.198259 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94aa1e8b-d364-40d2-9c05-39e890317a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gjtls" podUID="94aa1e8b-d364-40d2-9c05-39e890317a94" Jan 20 01:43:20.212571 containerd[1496]: time="2026-01-20T01:43:20.212396649Z" level=error msg="StopPodSandbox for \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\" failed" error="failed to destroy network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.213436 kubelet[2689]: E0120 01:43:20.212882 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:20.213436 kubelet[2689]: E0120 01:43:20.213037 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70"} Jan 20 01:43:20.213436 kubelet[2689]: E0120 01:43:20.213122 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7f445973-85d0-4221-8af9-3dc0c3aa4878\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.213436 kubelet[2689]: E0120 01:43:20.213179 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7f445973-85d0-4221-8af9-3dc0c3aa4878\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:43:20.218265 containerd[1496]: time="2026-01-20T01:43:20.217742943Z" level=error msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" failed" error="failed to destroy network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.218597 kubelet[2689]: E0120 01:43:20.218505 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:20.218800 kubelet[2689]: E0120 01:43:20.218596 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42"} Jan 20 01:43:20.218800 kubelet[2689]: E0120 01:43:20.218673 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.218800 kubelet[2689]: E0120 01:43:20.218710 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-596775c78f-n9sm2" podUID="dad1d3fa-2f8c-4259-b917-059c3b3e6572" Jan 20 01:43:20.230943 containerd[1496]: time="2026-01-20T01:43:20.230029809Z" level=error msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" failed" error="failed to destroy network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.231076 kubelet[2689]: E0120 01:43:20.230372 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:43:20.231076 kubelet[2689]: E0120 01:43:20.230433 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06"} Jan 20 01:43:20.231076 kubelet[2689]: E0120 01:43:20.230489 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63686bdb-630e-4c31-bb10-61a7b178bd09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.231076 kubelet[2689]: E0120 01:43:20.230529 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63686bdb-630e-4c31-bb10-61a7b178bd09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:43:20.255238 containerd[1496]: time="2026-01-20T01:43:20.255023354Z" level=error msg="StopPodSandbox for \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\" failed" error="failed to destroy network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.255635 kubelet[2689]: E0120 01:43:20.255425 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:20.255635 kubelet[2689]: E0120 01:43:20.255573 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27"} Jan 20 01:43:20.255802 kubelet[2689]: E0120 01:43:20.255632 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.255802 kubelet[2689]: E0120 01:43:20.255675 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:20.260237 containerd[1496]: time="2026-01-20T01:43:20.260191344Z" level=error msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" failed" error="failed to destroy network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.260628 kubelet[2689]: E0120 01:43:20.260573 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:43:20.260628 kubelet[2689]: E0120 01:43:20.260637 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671"} Jan 20 01:43:20.260963 kubelet[2689]: E0120 01:43:20.260674 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5bb26b29-89e1-4055-a3dd-e9f6156c0d75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.260963 kubelet[2689]: E0120 01:43:20.260738 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5bb26b29-89e1-4055-a3dd-e9f6156c0d75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:43:20.261872 containerd[1496]: time="2026-01-20T01:43:20.261491967Z" level=error msg="StopPodSandbox for 
\"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\" failed" error="failed to destroy network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.262251 kubelet[2689]: E0120 01:43:20.262006 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:20.262696 kubelet[2689]: E0120 01:43:20.262211 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a"} Jan 20 01:43:20.262696 kubelet[2689]: E0120 01:43:20.262610 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"896c437d-0a8d-496f-a420-742c93e0d6a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.263350 kubelet[2689]: E0120 01:43:20.263176 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"896c437d-0a8d-496f-a420-742c93e0d6a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jd9dv" podUID="896c437d-0a8d-496f-a420-742c93e0d6a2" Jan 20 01:43:20.265774 containerd[1496]: time="2026-01-20T01:43:20.265610530Z" level=error msg="StopPodSandbox for \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\" failed" error="failed to destroy network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:20.266470 kubelet[2689]: E0120 01:43:20.266283 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:20.266470 kubelet[2689]: E0120 01:43:20.266344 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb"} Jan 20 01:43:20.266470 kubelet[2689]: E0120 01:43:20.266386 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eedef20c-6169-4097-90af-4b5ed35e4c70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:20.266470 kubelet[2689]: E0120 01:43:20.266417 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eedef20c-6169-4097-90af-4b5ed35e4c70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:43:20.384302 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70-shm.mount: Deactivated successfully. Jan 20 01:43:20.384475 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5-shm.mount: Deactivated successfully. Jan 20 01:43:20.384602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb-shm.mount: Deactivated successfully. Jan 20 01:43:20.384762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06-shm.mount: Deactivated successfully. Jan 20 01:43:20.704877 kubelet[2689]: I0120 01:43:20.703550 2689 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 01:43:31.268755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1593886771.mount: Deactivated successfully. 
Jan 20 01:43:31.370681 containerd[1496]: time="2026-01-20T01:43:31.369122487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 20 01:43:31.383911 containerd[1496]: time="2026-01-20T01:43:31.382642947Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.463731001s" Jan 20 01:43:31.383911 containerd[1496]: time="2026-01-20T01:43:31.382733780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 20 01:43:31.414319 containerd[1496]: time="2026-01-20T01:43:31.414140446Z" level=info msg="CreateContainer within sandbox \"677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 01:43:31.422136 containerd[1496]: time="2026-01-20T01:43:31.421462034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:31.488089 containerd[1496]: time="2026-01-20T01:43:31.482151274Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:31.488089 containerd[1496]: time="2026-01-20T01:43:31.484603459Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 01:43:31.521399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703424749.mount: Deactivated successfully. Jan 20 01:43:31.533098 containerd[1496]: time="2026-01-20T01:43:31.533016026Z" level=info msg="CreateContainer within sandbox \"677b90dcb963bcba865bdba28e8c8ba6b166f2669cc71892f6be08918c6a241a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc\"" Jan 20 01:43:31.536848 containerd[1496]: time="2026-01-20T01:43:31.536679789Z" level=info msg="StartContainer for \"0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc\"" Jan 20 01:43:31.687789 containerd[1496]: time="2026-01-20T01:43:31.687715063Z" level=info msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\"" Jan 20 01:43:31.689583 containerd[1496]: time="2026-01-20T01:43:31.689535823Z" level=info msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\"" Jan 20 01:43:31.691763 containerd[1496]: time="2026-01-20T01:43:31.691422943Z" level=info msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\"" Jan 20 01:43:31.698699 containerd[1496]: time="2026-01-20T01:43:31.698633279Z" level=info msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\"" Jan 20 01:43:31.863342 systemd[1]: Started cri-containerd-0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc.scope - libcontainer container 0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc. 
Jan 20 01:43:31.969714 containerd[1496]: time="2026-01-20T01:43:31.967583446Z" level=error msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" failed" error="failed to destroy network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:31.979147 containerd[1496]: time="2026-01-20T01:43:31.977413303Z" level=error msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" failed" error="failed to destroy network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:31.983644 kubelet[2689]: E0120 01:43:31.982383 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:43:31.983644 kubelet[2689]: E0120 01:43:31.982542 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06"} Jan 20 01:43:31.985413 kubelet[2689]: E0120 01:43:31.985350 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"63686bdb-630e-4c31-bb10-61a7b178bd09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:31.986960 kubelet[2689]: E0120 01:43:31.985451 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"63686bdb-630e-4c31-bb10-61a7b178bd09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:43:31.993195 kubelet[2689]: E0120 01:43:31.968158 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:43:31.993195 kubelet[2689]: E0120 01:43:31.992946 2689 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671"} Jan 20 01:43:31.993195 kubelet[2689]: E0120 01:43:31.993033 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5bb26b29-89e1-4055-a3dd-e9f6156c0d75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:31.994293 kubelet[2689]: E0120 01:43:31.993327 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5bb26b29-89e1-4055-a3dd-e9f6156c0d75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:43:32.018538 containerd[1496]: time="2026-01-20T01:43:32.018471511Z" level=error msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" failed" error="failed to destroy network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:32.019710 kubelet[2689]: E0120 01:43:32.019653 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:43:32.021049 kubelet[2689]: E0120 01:43:32.020900 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e"} Jan 20 01:43:32.021049 kubelet[2689]: E0120 01:43:32.020982 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"94aa1e8b-d364-40d2-9c05-39e890317a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:32.023426 kubelet[2689]: E0120 01:43:32.021054 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"94aa1e8b-d364-40d2-9c05-39e890317a94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gjtls" podUID="94aa1e8b-d364-40d2-9c05-39e890317a94" Jan 20 01:43:32.023426 kubelet[2689]: E0120 01:43:32.021486 2689 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:32.023426 kubelet[2689]: E0120 01:43:32.021537 2689 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42"} Jan 20 01:43:32.023426 kubelet[2689]: E0120 01:43:32.021576 2689 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 20 01:43:32.023826 containerd[1496]: time="2026-01-20T01:43:32.021198978Z" level=error msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" failed" error="failed to destroy network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 01:43:32.023924 kubelet[2689]: E0120 01:43:32.021610 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-596775c78f-n9sm2" podUID="dad1d3fa-2f8c-4259-b917-059c3b3e6572" Jan 20 01:43:32.078135 containerd[1496]: time="2026-01-20T01:43:32.076604428Z" level=info msg="StartContainer for \"0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc\" returns successfully" Jan 20 01:43:32.172612 kubelet[2689]: I0120 01:43:32.167925 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kb97z" podStartSLOduration=1.943227679 podStartE2EDuration="27.156136585s" podCreationTimestamp="2026-01-20 01:43:05 +0000 UTC" firstStartedPulling="2026-01-20 01:43:06.172350493 +0000 UTC m=+24.698777878" lastFinishedPulling="2026-01-20 01:43:31.385259401 +0000 UTC m=+49.911686784" observedRunningTime="2026-01-20 01:43:32.15404016 +0000 UTC m=+50.680467554" watchObservedRunningTime="2026-01-20 01:43:32.156136585 +0000 UTC m=+50.682563963" Jan 20 
01:43:32.541822 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 01:43:32.543917 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 20 01:43:32.674587 containerd[1496]: time="2026-01-20T01:43:32.673474530Z" level=info msg="StopPodSandbox for \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\"" Jan 20 01:43:32.675286 containerd[1496]: time="2026-01-20T01:43:32.674588722Z" level=info msg="StopPodSandbox for \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\"" Jan 20 01:43:32.682694 containerd[1496]: time="2026-01-20T01:43:32.681442536Z" level=info msg="StopPodSandbox for \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\"" Jan 20 01:43:32.893164 containerd[1496]: time="2026-01-20T01:43:32.892970108Z" level=info msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\"" Jan 20 01:43:33.324603 systemd[1]: run-containerd-runc-k8s.io-0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc-runc.NlVHtR.mount: Deactivated successfully. Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.062 [INFO][4095] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.064 [INFO][4095] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" iface="eth0" netns="/var/run/netns/cni-6ab7c5c0-5b74-7ecc-e854-e7c1b3af6ab0" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.065 [INFO][4095] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" iface="eth0" netns="/var/run/netns/cni-6ab7c5c0-5b74-7ecc-e854-e7c1b3af6ab0" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.066 [INFO][4095] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" iface="eth0" netns="/var/run/netns/cni-6ab7c5c0-5b74-7ecc-e854-e7c1b3af6ab0" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.066 [INFO][4095] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.066 [INFO][4095] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.275 [INFO][4117] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.284 [INFO][4117] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.290 [INFO][4117] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.334 [WARNING][4117] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.334 [INFO][4117] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.338 [INFO][4117] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:33.350943 containerd[1496]: 2026-01-20 01:43:33.346 [INFO][4095] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:33.358948 containerd[1496]: time="2026-01-20T01:43:33.355100355Z" level=info msg="TearDown network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" successfully" Jan 20 01:43:33.358948 containerd[1496]: time="2026-01-20T01:43:33.355148310Z" level=info msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" returns successfully" Jan 20 01:43:33.356321 systemd[1]: run-netns-cni\x2d6ab7c5c0\x2d5b74\x2d7ecc\x2de854\x2de7c1b3af6ab0.mount: Deactivated successfully. Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:32.934 [INFO][4052] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:32.935 [INFO][4052] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" iface="eth0" netns="/var/run/netns/cni-ecd36a47-83e0-666f-844b-623e1f3fa460" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:32.936 [INFO][4052] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" iface="eth0" netns="/var/run/netns/cni-ecd36a47-83e0-666f-844b-623e1f3fa460" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:32.941 [INFO][4052] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" iface="eth0" netns="/var/run/netns/cni-ecd36a47-83e0-666f-844b-623e1f3fa460" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:32.941 [INFO][4052] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:32.941 [INFO][4052] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:33.298 [INFO][4102] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:33.299 [INFO][4102] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:33.338 [INFO][4102] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:33.354 [WARNING][4102] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:33.357 [INFO][4102] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:33.362 [INFO][4102] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:33.373870 containerd[1496]: 2026-01-20 01:43:33.367 [INFO][4052] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:33.376671 systemd[1]: run-netns-cni\x2decd36a47\x2d83e0\x2d666f\x2d844b\x2d623e1f3fa460.mount: Deactivated successfully. Jan 20 01:43:33.378385 containerd[1496]: time="2026-01-20T01:43:33.374967168Z" level=info msg="TearDown network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\" successfully" Jan 20 01:43:33.378385 containerd[1496]: time="2026-01-20T01:43:33.378156707Z" level=info msg="StopPodSandbox for \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\" returns successfully" Jan 20 01:43:33.380309 containerd[1496]: time="2026-01-20T01:43:33.380197787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jd9dv,Uid:896c437d-0a8d-496f-a420-742c93e0d6a2,Namespace:kube-system,Attempt:1,}" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:32.935 [INFO][4066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:32.936 [INFO][4066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" iface="eth0" netns="/var/run/netns/cni-be8d1193-aadd-86e6-0802-0c99672d798e" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:32.938 [INFO][4066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" iface="eth0" netns="/var/run/netns/cni-be8d1193-aadd-86e6-0802-0c99672d798e" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:32.939 [INFO][4066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" iface="eth0" netns="/var/run/netns/cni-be8d1193-aadd-86e6-0802-0c99672d798e" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:32.939 [INFO][4066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:32.939 [INFO][4066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:33.308 [INFO][4100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:33.312 [INFO][4100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:33.362 [INFO][4100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:33.388 [WARNING][4100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:33.388 [INFO][4100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:33.392 [INFO][4100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:33.408922 containerd[1496]: 2026-01-20 01:43:33.399 [INFO][4066] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:33.409826 containerd[1496]: time="2026-01-20T01:43:33.409302936Z" level=info msg="TearDown network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\" successfully" Jan 20 01:43:33.409826 containerd[1496]: time="2026-01-20T01:43:33.409364654Z" level=info msg="StopPodSandbox for \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\" returns successfully" Jan 20 01:43:33.414867 containerd[1496]: time="2026-01-20T01:43:33.414324541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w59jj,Uid:c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd,Namespace:calico-system,Attempt:1,}" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:32.944 [INFO][4065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:32.945 [INFO][4065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" iface="eth0" netns="/var/run/netns/cni-ef0e5023-53c8-54f2-7127-099e9e09d9f5" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:32.946 [INFO][4065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" iface="eth0" netns="/var/run/netns/cni-ef0e5023-53c8-54f2-7127-099e9e09d9f5" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:32.946 [INFO][4065] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" iface="eth0" netns="/var/run/netns/cni-ef0e5023-53c8-54f2-7127-099e9e09d9f5" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:32.947 [INFO][4065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:32.947 [INFO][4065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:33.314 [INFO][4104] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:33.315 [INFO][4104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:33.399 [INFO][4104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:33.436 [WARNING][4104] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:33.436 [INFO][4104] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:33.446 [INFO][4104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:33.471122 containerd[1496]: 2026-01-20 01:43:33.448 [INFO][4065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:33.474545 containerd[1496]: time="2026-01-20T01:43:33.474400350Z" level=info msg="TearDown network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\" successfully" Jan 20 01:43:33.474545 containerd[1496]: time="2026-01-20T01:43:33.474458973Z" level=info msg="StopPodSandbox for \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\" returns successfully" Jan 20 01:43:33.478859 containerd[1496]: time="2026-01-20T01:43:33.477717702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kt727,Uid:7f445973-85d0-4221-8af9-3dc0c3aa4878,Namespace:calico-system,Attempt:1,}" Jan 20 01:43:33.554992 kubelet[2689]: I0120 01:43:33.554895 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t4jg\" (UniqueName: \"kubernetes.io/projected/dad1d3fa-2f8c-4259-b917-059c3b3e6572-kube-api-access-8t4jg\") pod \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\" (UID: \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\") " Jan 20 01:43:33.555723 kubelet[2689]: I0120 01:43:33.555017 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-backend-key-pair\") pod \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\" (UID: \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\") " Jan 20 01:43:33.558066 kubelet[2689]: I0120 01:43:33.558018 2689 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-ca-bundle\") pod \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\" (UID: \"dad1d3fa-2f8c-4259-b917-059c3b3e6572\") " Jan 20 01:43:33.596037 kubelet[2689]: I0120 01:43:33.579001 2689 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "dad1d3fa-2f8c-4259-b917-059c3b3e6572" (UID: "dad1d3fa-2f8c-4259-b917-059c3b3e6572"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 01:43:33.612237 kubelet[2689]: I0120 01:43:33.612164 2689 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad1d3fa-2f8c-4259-b917-059c3b3e6572-kube-api-access-8t4jg" (OuterVolumeSpecName: "kube-api-access-8t4jg") pod "dad1d3fa-2f8c-4259-b917-059c3b3e6572" (UID: "dad1d3fa-2f8c-4259-b917-059c3b3e6572"). 
InnerVolumeSpecName "kube-api-access-8t4jg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 01:43:33.612450 kubelet[2689]: I0120 01:43:33.612126 2689 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "dad1d3fa-2f8c-4259-b917-059c3b3e6572" (UID: "dad1d3fa-2f8c-4259-b917-059c3b3e6572"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 01:43:33.658907 kubelet[2689]: I0120 01:43:33.658512 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-backend-key-pair\") on node \"srv-vpmg3.gb1.brightbox.com\" DevicePath \"\"" Jan 20 01:43:33.658907 kubelet[2689]: I0120 01:43:33.658573 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dad1d3fa-2f8c-4259-b917-059c3b3e6572-whisker-ca-bundle\") on node \"srv-vpmg3.gb1.brightbox.com\" DevicePath \"\"" Jan 20 01:43:33.658907 kubelet[2689]: I0120 01:43:33.658592 2689 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8t4jg\" (UniqueName: \"kubernetes.io/projected/dad1d3fa-2f8c-4259-b917-059c3b3e6572-kube-api-access-8t4jg\") on node \"srv-vpmg3.gb1.brightbox.com\" DevicePath \"\"" Jan 20 01:43:33.731386 systemd[1]: Removed slice kubepods-besteffort-poddad1d3fa_2f8c_4259_b917_059c3b3e6572.slice - libcontainer container kubepods-besteffort-poddad1d3fa_2f8c_4259_b917_059c3b3e6572.slice. Jan 20 01:43:34.115636 systemd-networkd[1434]: cali43c5a84df5e: Link UP Jan 20 01:43:34.125236 systemd-networkd[1434]: cali43c5a84df5e: Gained carrier Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.717 [INFO][4156] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.774 [INFO][4156] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0 csi-node-driver- calico-system c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd 950 0 2026-01-20 01:43:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com csi-node-driver-w59jj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali43c5a84df5e [] [] }} ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.779 [INFO][4156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.889 [INFO][4192] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" HandleID="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.892 [INFO][4192] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" HandleID="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033d120), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"csi-node-driver-w59jj", "timestamp":"2026-01-20 01:43:33.889313827 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.892 [INFO][4192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.892 [INFO][4192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.892 [INFO][4192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.938 [INFO][4192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.958 [INFO][4192] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.979 [INFO][4192] ipam/ipam.go 543: Ran out of existing affine blocks for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.983 [INFO][4192] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.987 [INFO][4192] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.987 [INFO][4192] ipam/ipam.go 572: Found unclaimed block host="srv-vpmg3.gb1.brightbox.com" subnet=192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.987 [INFO][4192] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="srv-vpmg3.gb1.brightbox.com" subnet=192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.996 [INFO][4192] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="srv-vpmg3.gb1.brightbox.com" subnet=192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:33.996 [INFO][4192] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.004 [INFO][4192] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.015 [INFO][4192] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.018 [INFO][4192] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.018 [INFO][4192] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-vpmg3.gb1.brightbox.com" subnet=192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.026 [INFO][4192] ipam/ipam_block_reader_writer.go 267: Successfully created block Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.026 [INFO][4192] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-vpmg3.gb1.brightbox.com" subnet=192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.032 [INFO][4192] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="srv-vpmg3.gb1.brightbox.com" subnet=192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.032 [INFO][4192] ipam/ipam.go 607: Block '192.168.21.128/26' has 64 free ips which is more than 1 ips required. 
host="srv-vpmg3.gb1.brightbox.com" subnet=192.168.21.128/26 Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.032 [INFO][4192] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.195639 containerd[1496]: 2026-01-20 01:43:34.035 [INFO][4192] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.042 [INFO][4192] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.051 [INFO][4192] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.128/26] block=192.168.21.128/26 handle="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.052 [INFO][4192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.128/26] handle="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.053 [INFO][4192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.053 [INFO][4192] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.128/26] IPv6=[] ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" HandleID="k8s-pod-network.c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.059 [INFO][4156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-w59jj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43c5a84df5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.060 [INFO][4156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.128/32] ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.060 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43c5a84df5e ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.133 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:34.201706 containerd[1496]: 2026-01-20 01:43:34.137 [INFO][4156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db", Pod:"csi-node-driver-w59jj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43c5a84df5e", MAC:"b6:a1:ea:67:2f:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:34.202796 containerd[1496]: 2026-01-20 01:43:34.181 [INFO][4156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db" 
Namespace="calico-system" Pod="csi-node-driver-w59jj" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:34.325382 systemd[1]: run-netns-cni\x2dbe8d1193\x2daadd\x2d86e6\x2d0802\x2d0c99672d798e.mount: Deactivated successfully. Jan 20 01:43:34.325568 systemd[1]: run-netns-cni\x2def0e5023\x2d53c8\x2d54f2\x2d7127\x2d099e9e09d9f5.mount: Deactivated successfully. Jan 20 01:43:34.325696 systemd[1]: var-lib-kubelet-pods-dad1d3fa\x2d2f8c\x2d4259\x2db917\x2d059c3b3e6572-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8t4jg.mount: Deactivated successfully. Jan 20 01:43:34.326732 systemd[1]: var-lib-kubelet-pods-dad1d3fa\x2d2f8c\x2d4259\x2db917\x2d059c3b3e6572-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 01:43:34.338337 systemd-networkd[1434]: calieb89d6df6bf: Link UP Jan 20 01:43:34.338955 systemd-networkd[1434]: calieb89d6df6bf: Gained carrier Jan 20 01:43:34.366809 containerd[1496]: time="2026-01-20T01:43:34.365289851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:34.366809 containerd[1496]: time="2026-01-20T01:43:34.365431564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:34.366809 containerd[1496]: time="2026-01-20T01:43:34.365470220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:34.366809 containerd[1496]: time="2026-01-20T01:43:34.365683655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:34.414360 systemd[1]: run-containerd-runc-k8s.io-c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db-runc.S2wZPB.mount: Deactivated successfully. Jan 20 01:43:34.429100 systemd[1]: Started cri-containerd-c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db.scope - libcontainer container c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db. 
Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:33.630 [INFO][4151] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:33.692 [INFO][4151] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0 coredns-668d6bf9bc- kube-system 896c437d-0a8d-496f-a420-742c93e0d6a2 951 0 2026-01-20 01:42:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com coredns-668d6bf9bc-jd9dv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieb89d6df6bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:33.692 [INFO][4151] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:33.911 [INFO][4184] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" HandleID="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:33.912 [INFO][4184] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" HandleID="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e460), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-jd9dv", "timestamp":"2026-01-20 01:43:33.911326699 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:33.912 [INFO][4184] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.052 [INFO][4184] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
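
The mount-unit names in the systemd entries a few lines back, such as run-netns-cni\x2dbe8d1193\x2daadd\x2d86e6\x2d0802\x2d0c99672d798e.mount, are not corrupted text: systemd derives unit names from paths by turning "/" into "-" and hex-escaping other non-alphanumeric bytes, so every literal "-" in the netns name becomes \x2d. A rough Go approximation of that escaping, following the description in systemd.unit(5) (the `escapePath` helper is mine, and this is a reading aid rather than a complete reimplementation):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: "/" becomes "-",
// and any byte that is not an ASCII alphanumeric, ":", "_" or "."
// is rewritten as \xNN.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == ':', c == '_', c == '.' && i > 0:
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the unit name from the netns cleanup entries above.
	fmt.Println(escapePath("/run/netns/cni-be8d1193-aadd-86e6-0802-0c99672d798e") + ".mount")
	// run-netns-cni\x2dbe8d1193\x2daadd\x2d86e6\x2d0802\x2d0c99672d798e.mount
}
```
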
Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.052 [INFO][4184] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.067 [INFO][4184] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.106 [INFO][4184] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.137 [INFO][4184] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.147 [INFO][4184] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.163 [INFO][4184] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.166 [INFO][4184] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.184 [INFO][4184] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6 Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.200 [INFO][4184] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.228 [INFO][4184] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.129/26] block=192.168.21.128/26 handle="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.229 [INFO][4184] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.129/26] handle="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.229 [INFO][4184] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
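
The IPAM arithmetic in these entries is self-consistent: ipam.go earlier reported that block 192.168.21.128/26 "has 64 free ips", which follows from the prefix length (32 - 26 = 6 host bits, so 2^6 = 64 addresses, .128 through .191), and coredns-668d6bf9bc-jd9dv has just drawn the second of them, 192.168.21.129. A quick check with the Go standard library:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block claimed earlier in this log.
	block := netip.MustParsePrefix("192.168.21.128/26")

	// 32-26 = 6 host bits -> 1<<6 = 64 addresses, matching
	// "Block '192.168.21.128/26' has 64 free ips".
	size := 1 << (32 - block.Bits())

	last := block.Addr()
	for i := 1; i < size; i++ {
		last = last.Next()
	}
	fmt.Printf("%s: %d addresses, %s through %s\n", block, size, block.Addr(), last)
	// 192.168.21.128/26: 64 addresses, 192.168.21.128 through 192.168.21.191
}
```
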
Jan 20 01:43:34.443044 containerd[1496]: 2026-01-20 01:43:34.230 [INFO][4184] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.129/26] IPv6=[] ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" HandleID="k8s-pod-network.1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:34.444779 containerd[1496]: 2026-01-20 01:43:34.251 [INFO][4151] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"896c437d-0a8d-496f-a420-742c93e0d6a2", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-jd9dv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb89d6df6bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:34.444779 containerd[1496]: 2026-01-20 01:43:34.253 [INFO][4151] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.129/32] ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:34.444779 containerd[1496]: 2026-01-20 01:43:34.253 [INFO][4151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb89d6df6bf ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:34.444779 containerd[1496]: 2026-01-20 01:43:34.352 [INFO][4151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:34.444779 containerd[1496]: 2026-01-20 01:43:34.368 [INFO][4151] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"896c437d-0a8d-496f-a420-742c93e0d6a2", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6", Pod:"coredns-668d6bf9bc-jd9dv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb89d6df6bf", MAC:"7e:0e:3b:f1:60:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:34.444779 containerd[1496]: 2026-01-20 01:43:34.432 [INFO][4151] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6" Namespace="kube-system" Pod="coredns-668d6bf9bc-jd9dv" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:34.524463 containerd[1496]: time="2026-01-20T01:43:34.524217638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:34.524463 containerd[1496]: time="2026-01-20T01:43:34.524381534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:34.524463 containerd[1496]: time="2026-01-20T01:43:34.524409889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:34.528762 containerd[1496]: time="2026-01-20T01:43:34.528647376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:34.578610 systemd[1]: Created slice kubepods-besteffort-poddd0de801_e3e8_44b8_afed_383a8eb729ca.slice - libcontainer container kubepods-besteffort-poddd0de801_e3e8_44b8_afed_383a8eb729ca.slice. Jan 20 01:43:34.599054 systemd[1]: Started cri-containerd-1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6.scope - libcontainer container 1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6. Jan 20 01:43:34.674819 systemd-networkd[1434]: cali6d157e1114b: Link UP Jan 20 01:43:34.676309 systemd-networkd[1434]: cali6d157e1114b: Gained carrier Jan 20 01:43:34.689795 containerd[1496]: time="2026-01-20T01:43:34.688489534Z" level=info msg="StopPodSandbox for \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\"" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:33.710 [INFO][4168] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:33.776 [INFO][4168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0 goldmane-666569f655- calico-system 7f445973-85d0-4221-8af9-3dc0c3aa4878 952 0 2026-01-20 01:43:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com goldmane-666569f655-kt727 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6d157e1114b [] [] }} ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:33.776 [INFO][4168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:33.944 [INFO][4190] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" HandleID="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:33.944 [INFO][4190] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" HandleID="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123d00), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"goldmane-666569f655-kt727", "timestamp":"2026-01-20 01:43:33.944478875 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:34.768936 
containerd[1496]: 2026-01-20 01:43:33.944 [INFO][4190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.235 [INFO][4190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.236 [INFO][4190] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.360 [INFO][4190] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.395 [INFO][4190] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.468 [INFO][4190] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.494 [INFO][4190] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.531 [INFO][4190] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.531 [INFO][4190] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.565 [INFO][4190] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56 Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.620 [INFO][4190] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.654 [INFO][4190] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.130/26] block=192.168.21.128/26 handle="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.654 [INFO][4190] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.130/26] handle="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.655 [INFO][4190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
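
Across the three assignments in this section the same sequence repeats: take the host-wide IPAM lock, look for a block already affine to the host, claim a fresh /26 only when none has space, then hand out the next free address. That is why csi-node-driver-w59jj (which triggered the block claim) got .128, coredns-jd9dv got .129, and goldmane-kt727 here gets .130. Below is a toy reconstruction of that decision order; the types and the assumption of strictly sequential allocation within a block are mine, and Calico's real allocator (pending affinities, retries, datastore writes) is considerably more involved.

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy /26 IPAM block affine to one host.
type block struct {
	prefix netip.Prefix
	next   netip.Addr // next unassigned address
	free   int
}

var (
	ipamLock sync.Mutex // stands in for the "host-wide IPAM lock"
	affine   []*block   // blocks already affine to this host
)

// autoAssign mirrors the order of operations in the log: look up
// existing affinities first, claim a new block only if none has space.
func autoAssign() netip.Prefix {
	ipamLock.Lock()
	defer ipamLock.Unlock()
	for _, b := range affine {
		if b.free > 0 {
			// "Affinity is confirmed and block has been loaded"
			return b.assign()
		}
	}
	// "Ran out of existing affine blocks": pending affinity, create
	// block, confirm affinity (collapsed into one step in this toy).
	p := netip.MustParsePrefix("192.168.21.128/26")
	b := &block{prefix: p, next: p.Addr(), free: 1 << (32 - p.Bits())}
	affine = append(affine, b)
	return b.assign()
}

func (b *block) assign() netip.Prefix {
	ip := b.next
	b.next = ip.Next()
	b.free--
	return netip.PrefixFrom(ip, 32)
}

func main() {
	pods := []string{"csi-node-driver-w59jj", "coredns-668d6bf9bc-jd9dv", "goldmane-666569f655-kt727"}
	for _, pod := range pods {
		fmt.Printf("%-26s -> %s\n", pod, autoAssign())
	}
	// csi-node-driver-w59jj      -> 192.168.21.128/32
	// coredns-668d6bf9bc-jd9dv   -> 192.168.21.129/32
	// goldmane-666569f655-kt727  -> 192.168.21.130/32
}
```
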
Jan 20 01:43:34.768936 containerd[1496]: 2026-01-20 01:43:34.655 [INFO][4190] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.130/26] IPv6=[] ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" HandleID="k8s-pod-network.ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:34.771306 containerd[1496]: 2026-01-20 01:43:34.662 [INFO][4168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f445973-85d0-4221-8af9-3dc0c3aa4878", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-kt727", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d157e1114b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:34.771306 containerd[1496]: 2026-01-20 01:43:34.665 [INFO][4168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.130/32] ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:34.771306 containerd[1496]: 2026-01-20 01:43:34.665 [INFO][4168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d157e1114b ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:34.771306 containerd[1496]: 2026-01-20 01:43:34.680 [INFO][4168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:34.771306 containerd[1496]: 2026-01-20 01:43:34.680 [INFO][4168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" 
Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f445973-85d0-4221-8af9-3dc0c3aa4878", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56", Pod:"goldmane-666569f655-kt727", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d157e1114b", MAC:"3a:b1:15:b2:d6:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:34.771306 containerd[1496]: 2026-01-20 01:43:34.760 [INFO][4168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56" Namespace="calico-system" Pod="goldmane-666569f655-kt727" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:34.771700 kubelet[2689]: I0120 01:43:34.771381 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd0de801-e3e8-44b8-afed-383a8eb729ca-whisker-backend-key-pair\") pod \"whisker-6df6c9ff7-pskf4\" (UID: \"dd0de801-e3e8-44b8-afed-383a8eb729ca\") " pod="calico-system/whisker-6df6c9ff7-pskf4" Jan 20 01:43:34.771700 kubelet[2689]: I0120 01:43:34.771563 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd0de801-e3e8-44b8-afed-383a8eb729ca-whisker-ca-bundle\") pod \"whisker-6df6c9ff7-pskf4\" (UID: \"dd0de801-e3e8-44b8-afed-383a8eb729ca\") " pod="calico-system/whisker-6df6c9ff7-pskf4" Jan 20 01:43:34.771700 kubelet[2689]: I0120 01:43:34.771618 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzvnm\" (UniqueName: \"kubernetes.io/projected/dd0de801-e3e8-44b8-afed-383a8eb729ca-kube-api-access-vzvnm\") pod \"whisker-6df6c9ff7-pskf4\" (UID: \"dd0de801-e3e8-44b8-afed-383a8eb729ca\") " pod="calico-system/whisker-6df6c9ff7-pskf4" Jan 20 01:43:34.914224 containerd[1496]: time="2026-01-20T01:43:34.914087525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w59jj,Uid:c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db\"" Jan 20 01:43:34.914968 containerd[1496]: time="2026-01-20T01:43:34.914496299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jd9dv,Uid:896c437d-0a8d-496f-a420-742c93e0d6a2,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6\"" Jan 20 01:43:34.937891 containerd[1496]: time="2026-01-20T01:43:34.933224323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:34.937891 containerd[1496]: time="2026-01-20T01:43:34.933349116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:34.937891 containerd[1496]: time="2026-01-20T01:43:34.933488427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:34.937891 containerd[1496]: time="2026-01-20T01:43:34.934043235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:34.950139 containerd[1496]: time="2026-01-20T01:43:34.949957175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:43:34.958108 containerd[1496]: time="2026-01-20T01:43:34.957678511Z" level=info msg="CreateContainer within sandbox \"1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:43:35.008102 systemd[1]: Started cri-containerd-ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56.scope - libcontainer container ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56. Jan 20 01:43:35.082791 containerd[1496]: time="2026-01-20T01:43:35.082717416Z" level=info msg="CreateContainer within sandbox \"1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f59f5f5b66df615c0ee04577781bf718ed12bcc533d7e95f86076321703a75ae\"" Jan 20 01:43:35.086027 containerd[1496]: time="2026-01-20T01:43:35.085878310Z" level=info msg="StartContainer for \"f59f5f5b66df615c0ee04577781bf718ed12bcc533d7e95f86076321703a75ae\"" Jan 20 01:43:35.193606 containerd[1496]: time="2026-01-20T01:43:35.192581442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df6c9ff7-pskf4,Uid:dd0de801-e3e8-44b8-afed-383a8eb729ca,Namespace:calico-system,Attempt:0,}" Jan 20 01:43:35.255076 systemd[1]: Started cri-containerd-f59f5f5b66df615c0ee04577781bf718ed12bcc533d7e95f86076321703a75ae.scope - libcontainer container f59f5f5b66df615c0ee04577781bf718ed12bcc533d7e95f86076321703a75ae. Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:34.905 [INFO][4304] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:34.914 [INFO][4304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" iface="eth0" netns="/var/run/netns/cni-096c8ae4-2e59-40ea-dbfe-71a6c85412fd" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:34.915 [INFO][4304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" iface="eth0" netns="/var/run/netns/cni-096c8ae4-2e59-40ea-dbfe-71a6c85412fd" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:34.919 [INFO][4304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" iface="eth0" netns="/var/run/netns/cni-096c8ae4-2e59-40ea-dbfe-71a6c85412fd" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:34.920 [INFO][4304] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:34.920 [INFO][4304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:35.147 [INFO][4349] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:35.149 [INFO][4349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:35.149 [INFO][4349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:35.184 [WARNING][4349] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:35.185 [INFO][4349] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:35.203 [INFO][4349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:35.270148 containerd[1496]: 2026-01-20 01:43:35.218 [INFO][4304] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:35.272464 containerd[1496]: time="2026-01-20T01:43:35.271615793Z" level=info msg="TearDown network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\" successfully" Jan 20 01:43:35.272464 containerd[1496]: time="2026-01-20T01:43:35.271664761Z" level=info msg="StopPodSandbox for \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\" returns successfully" Jan 20 01:43:35.278243 containerd[1496]: time="2026-01-20T01:43:35.277210910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-fhvkc,Uid:573ad695-5762-4b18-9450-3954cd6448a6,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:43:35.329936 containerd[1496]: time="2026-01-20T01:43:35.329881803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kt727,Uid:7f445973-85d0-4221-8af9-3dc0c3aa4878,Namespace:calico-system,Attempt:1,} returns sandbox id \"ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56\"" Jan 20 01:43:35.333801 systemd[1]: run-netns-cni\x2d096c8ae4\x2d2e59\x2d40ea\x2ddbfe\x2d71a6c85412fd.mount: Deactivated successfully. Jan 20 01:43:35.363074 containerd[1496]: time="2026-01-20T01:43:35.362953377Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:35.396623 containerd[1496]: time="2026-01-20T01:43:35.369921966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:43:35.396920 containerd[1496]: time="2026-01-20T01:43:35.370651021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:43:35.405428 kubelet[2689]: E0120 01:43:35.404500 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:35.406264 kubelet[2689]: E0120 01:43:35.406231 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:35.409233 containerd[1496]: time="2026-01-20T01:43:35.409174894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:43:35.437969 containerd[1496]: time="2026-01-20T01:43:35.437107141Z" level=info msg="StartContainer for \"f59f5f5b66df615c0ee04577781bf718ed12bcc533d7e95f86076321703a75ae\" returns successfully" Jan 20 01:43:35.438872 kubelet[2689]: E0120 01:43:35.438637 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:35.686765 systemd[1]: run-containerd-runc-k8s.io-0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc-runc.Dhvipq.mount: Deactivated successfully. 
Jan 20 01:43:35.704386 systemd-networkd[1434]: caliafd8a659f75: Link UP Jan 20 01:43:35.707894 systemd-networkd[1434]: caliafd8a659f75: Gained carrier Jan 20 01:43:35.712807 kubelet[2689]: I0120 01:43:35.712649 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad1d3fa-2f8c-4259-b917-059c3b3e6572" path="/var/lib/kubelet/pods/dad1d3fa-2f8c-4259-b917-059c3b3e6572/volumes" Jan 20 01:43:35.720098 containerd[1496]: time="2026-01-20T01:43:35.719598244Z" level=info msg="StopPodSandbox for \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\"" Jan 20 01:43:35.746652 systemd-networkd[1434]: calieb89d6df6bf: Gained IPv6LL Jan 20 01:43:35.768399 containerd[1496]: time="2026-01-20T01:43:35.768255348Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:35.773323 containerd[1496]: time="2026-01-20T01:43:35.773197097Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:43:35.774554 containerd[1496]: time="2026-01-20T01:43:35.773623733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:35.774705 kubelet[2689]: E0120 01:43:35.773985 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:35.774705 kubelet[2689]: E0120 01:43:35.774085 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:35.774705 kubelet[2689]: E0120 01:43:35.774500 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zx2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kt727_calico-system(7f445973-85d0-4221-8af9-3dc0c3aa4878): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:35.778107 kubelet[2689]: E0120 01:43:35.776358 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:43:35.778260 containerd[1496]: 
time="2026-01-20T01:43:35.776625832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.466 [INFO][4419] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.504 [INFO][4419] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0 calico-apiserver-799b8f498b- calico-apiserver 573ad695-5762-4b18-9450-3954cd6448a6 992 0 2026-01-20 01:42:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799b8f498b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com calico-apiserver-799b8f498b-fhvkc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliafd8a659f75 [] [] }} ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.504 [INFO][4419] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.573 [INFO][4451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" HandleID="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.573 [INFO][4451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" HandleID="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5d60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"calico-apiserver-799b8f498b-fhvkc", "timestamp":"2026-01-20 01:43:35.573186283 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.573 [INFO][4451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.573 [INFO][4451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.573 [INFO][4451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.593 [INFO][4451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.606 [INFO][4451] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.615 [INFO][4451] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.619 [INFO][4451] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.624 [INFO][4451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.624 [INFO][4451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.627 [INFO][4451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562 Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.636 [INFO][4451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.656 [INFO][4451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.132/26] block=192.168.21.128/26 handle="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.656 [INFO][4451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.132/26] handle="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.656 [INFO][4451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:43:35.778372 containerd[1496]: 2026-01-20 01:43:35.656 [INFO][4451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.132/26] IPv6=[] ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" HandleID="k8s-pod-network.dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.783012 containerd[1496]: 2026-01-20 01:43:35.667 [INFO][4419] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573ad695-5762-4b18-9450-3954cd6448a6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-799b8f498b-fhvkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafd8a659f75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:35.783012 containerd[1496]: 2026-01-20 01:43:35.677 [INFO][4419] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.132/32] ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.783012 containerd[1496]: 2026-01-20 01:43:35.677 [INFO][4419] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliafd8a659f75 ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.783012 containerd[1496]: 2026-01-20 01:43:35.712 [INFO][4419] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.783012 containerd[1496]: 2026-01-20 
01:43:35.716 [INFO][4419] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573ad695-5762-4b18-9450-3954cd6448a6", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562", Pod:"calico-apiserver-799b8f498b-fhvkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafd8a659f75", MAC:"ca:72:20:26:0e:c2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:35.783012 containerd[1496]: 2026-01-20 01:43:35.759 [INFO][4419] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-fhvkc" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:35.871628 systemd-networkd[1434]: cali43c5a84df5e: Gained IPv6LL Jan 20 01:43:35.887167 containerd[1496]: time="2026-01-20T01:43:35.885016488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:35.887167 containerd[1496]: time="2026-01-20T01:43:35.885143558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:35.887167 containerd[1496]: time="2026-01-20T01:43:35.885215534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:35.887167 containerd[1496]: time="2026-01-20T01:43:35.885386087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:35.943742 systemd-networkd[1434]: calid03275a4422: Link UP Jan 20 01:43:35.949106 systemd-networkd[1434]: calid03275a4422: Gained carrier Jan 20 01:43:36.033062 systemd[1]: Started cri-containerd-dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562.scope - libcontainer container dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562. Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.351 [INFO][4393] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.399 [INFO][4393] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0 whisker-6df6c9ff7- calico-system dd0de801-e3e8-44b8-afed-383a8eb729ca 985 0 2026-01-20 01:43:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6df6c9ff7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com whisker-6df6c9ff7-pskf4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid03275a4422 [] [] }} ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.400 [INFO][4393] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.583 [INFO][4435] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" HandleID="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.583 [INFO][4435] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" HandleID="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd20), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"whisker-6df6c9ff7-pskf4", "timestamp":"2026-01-20 01:43:35.58328105 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.583 [INFO][4435] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.658 [INFO][4435] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.658 [INFO][4435] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.701 [INFO][4435] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.751 [INFO][4435] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.800 [INFO][4435] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.811 [INFO][4435] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.816 [INFO][4435] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.817 [INFO][4435] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.822 [INFO][4435] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7 Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.835 [INFO][4435] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.858 [INFO][4435] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.133/26] block=192.168.21.128/26 handle="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.858 [INFO][4435] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.133/26] handle="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.858 [INFO][4435] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:43:36.043265 containerd[1496]: 2026-01-20 01:43:35.859 [INFO][4435] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.133/26] IPv6=[] ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" HandleID="k8s-pod-network.466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" Jan 20 01:43:36.049577 containerd[1496]: 2026-01-20 01:43:35.870 [INFO][4393] cni-plugin/k8s.go 418: Populated endpoint ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0", GenerateName:"whisker-6df6c9ff7-", Namespace:"calico-system", SelfLink:"", UID:"dd0de801-e3e8-44b8-afed-383a8eb729ca", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df6c9ff7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6df6c9ff7-pskf4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid03275a4422", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:36.049577 containerd[1496]: 2026-01-20 01:43:35.870 [INFO][4393] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.133/32] ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" Jan 20 01:43:36.049577 containerd[1496]: 2026-01-20 01:43:35.870 [INFO][4393] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid03275a4422 ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" Jan 20 01:43:36.049577 containerd[1496]: 2026-01-20 01:43:35.955 [INFO][4393] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" Jan 20 01:43:36.049577 containerd[1496]: 2026-01-20 01:43:35.956 [INFO][4393] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" 
Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0", GenerateName:"whisker-6df6c9ff7-", Namespace:"calico-system", SelfLink:"", UID:"dd0de801-e3e8-44b8-afed-383a8eb729ca", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6df6c9ff7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7", Pod:"whisker-6df6c9ff7-pskf4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid03275a4422", MAC:"da:db:4c:c8:58:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:36.049577 containerd[1496]: 2026-01-20 01:43:36.021 [INFO][4393] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7" Namespace="calico-system" Pod="whisker-6df6c9ff7-pskf4" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--6df6c9ff7--pskf4-eth0" Jan 20 01:43:36.132688 containerd[1496]: time="2026-01-20T01:43:36.132492876Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:36.134384 containerd[1496]: time="2026-01-20T01:43:36.134173261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:43:36.134384 containerd[1496]: time="2026-01-20T01:43:36.134335368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:43:36.135151 kubelet[2689]: E0120 01:43:36.134964 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:36.135369 kubelet[2689]: E0120 01:43:36.135330 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:36.136565 kubelet[2689]: E0120 01:43:36.136173 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:36.140980 kubelet[2689]: E0120 01:43:36.140872 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:36.157338 containerd[1496]: time="2026-01-20T01:43:36.157069159Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:36.158768 containerd[1496]: time="2026-01-20T01:43:36.157393904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:36.158768 containerd[1496]: time="2026-01-20T01:43:36.157416102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:36.158768 containerd[1496]: time="2026-01-20T01:43:36.158382569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:36.191202 systemd-networkd[1434]: cali6d157e1114b: Gained IPv6LL Jan 20 01:43:36.195991 kubelet[2689]: E0120 01:43:36.193508 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:35.970 [INFO][4489] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:35.970 [INFO][4489] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" iface="eth0" netns="/var/run/netns/cni-042f11b1-bb9c-cc5e-f43f-78b61ab7512a" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:35.971 [INFO][4489] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" iface="eth0" netns="/var/run/netns/cni-042f11b1-bb9c-cc5e-f43f-78b61ab7512a" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:35.974 [INFO][4489] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" iface="eth0" netns="/var/run/netns/cni-042f11b1-bb9c-cc5e-f43f-78b61ab7512a" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:35.974 [INFO][4489] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:35.974 [INFO][4489] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:36.152 [INFO][4525] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:36.152 [INFO][4525] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:36.152 [INFO][4525] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:36.175 [WARNING][4525] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:36.175 [INFO][4525] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:36.178 [INFO][4525] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:36.197391 containerd[1496]: 2026-01-20 01:43:36.184 [INFO][4489] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:36.199953 containerd[1496]: time="2026-01-20T01:43:36.198349583Z" level=info msg="TearDown network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\" successfully" Jan 20 01:43:36.199953 containerd[1496]: time="2026-01-20T01:43:36.198522618Z" level=info msg="StopPodSandbox for \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\" returns successfully" Jan 20 01:43:36.200661 containerd[1496]: time="2026-01-20T01:43:36.200592074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849c94fcc7-89lqr,Uid:eedef20c-6169-4097-90af-4b5ed35e4c70,Namespace:calico-system,Attempt:1,}" Jan 20 01:43:36.227382 kubelet[2689]: E0120 01:43:36.227243 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:36.257093 systemd[1]: Started cri-containerd-466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7.scope - libcontainer container 466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7. Jan 20 01:43:36.327179 systemd[1]: run-netns-cni\x2d042f11b1\x2dbb9c\x2dcc5e\x2df43f\x2d78b61ab7512a.mount: Deactivated successfully. 
Jan 20 01:43:36.465883 containerd[1496]: time="2026-01-20T01:43:36.463707383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-fhvkc,Uid:573ad695-5762-4b18-9450-3954cd6448a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562\"" Jan 20 01:43:36.470980 containerd[1496]: time="2026-01-20T01:43:36.470800848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:36.560983 systemd-networkd[1434]: cali8e20fe8338e: Link UP Jan 20 01:43:36.564200 systemd-networkd[1434]: cali8e20fe8338e: Gained carrier Jan 20 01:43:36.584122 kubelet[2689]: I0120 01:43:36.584011 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jd9dv" podStartSLOduration=49.583949756 podStartE2EDuration="49.583949756s" podCreationTimestamp="2026-01-20 01:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:43:36.339462287 +0000 UTC m=+54.865889698" watchObservedRunningTime="2026-01-20 01:43:36.583949756 +0000 UTC m=+55.110377148" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.308 [INFO][4577] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.348 [INFO][4577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0 calico-kube-controllers-849c94fcc7- calico-system eedef20c-6169-4097-90af-4b5ed35e4c70 1015 0 2026-01-20 01:43:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:849c94fcc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com calico-kube-controllers-849c94fcc7-89lqr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8e20fe8338e [] [] }} ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.349 [INFO][4577] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.449 [INFO][4593] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" HandleID="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.451 [INFO][4593] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" HandleID="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" 
Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"calico-kube-controllers-849c94fcc7-89lqr", "timestamp":"2026-01-20 01:43:36.449772223 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.451 [INFO][4593] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.451 [INFO][4593] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.451 [INFO][4593] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.476 [INFO][4593] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.485 [INFO][4593] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.494 [INFO][4593] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.500 [INFO][4593] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.513 [INFO][4593] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.514 [INFO][4593] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.519 [INFO][4593] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1 Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.529 [INFO][4593] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.544 [INFO][4593] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.134/26] block=192.168.21.128/26 handle="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.544 [INFO][4593] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.134/26] handle="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.544 [INFO][4593] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:43:36.589532 containerd[1496]: 2026-01-20 01:43:36.544 [INFO][4593] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.134/26] IPv6=[] ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" HandleID="k8s-pod-network.7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.593029 containerd[1496]: 2026-01-20 01:43:36.549 [INFO][4577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0", GenerateName:"calico-kube-controllers-849c94fcc7-", Namespace:"calico-system", SelfLink:"", UID:"eedef20c-6169-4097-90af-4b5ed35e4c70", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849c94fcc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-849c94fcc7-89lqr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e20fe8338e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:36.593029 containerd[1496]: 2026-01-20 01:43:36.550 [INFO][4577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.134/32] ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.593029 containerd[1496]: 2026-01-20 01:43:36.550 [INFO][4577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8e20fe8338e ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.593029 containerd[1496]: 2026-01-20 01:43:36.563 [INFO][4577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" 
WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.593029 containerd[1496]: 2026-01-20 01:43:36.566 [INFO][4577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0", GenerateName:"calico-kube-controllers-849c94fcc7-", Namespace:"calico-system", SelfLink:"", UID:"eedef20c-6169-4097-90af-4b5ed35e4c70", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849c94fcc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1", Pod:"calico-kube-controllers-849c94fcc7-89lqr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e20fe8338e", MAC:"b6:c6:3d:c4:46:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:36.593029 containerd[1496]: 2026-01-20 01:43:36.586 [INFO][4577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1" Namespace="calico-system" Pod="calico-kube-controllers-849c94fcc7-89lqr" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:36.641874 containerd[1496]: time="2026-01-20T01:43:36.640852320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:36.642292 containerd[1496]: time="2026-01-20T01:43:36.641901115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:36.642292 containerd[1496]: time="2026-01-20T01:43:36.641936012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:36.642292 containerd[1496]: time="2026-01-20T01:43:36.642125128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:36.698343 systemd[1]: Started cri-containerd-7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1.scope - libcontainer container 7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1. Jan 20 01:43:36.741776 containerd[1496]: time="2026-01-20T01:43:36.741432285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6df6c9ff7-pskf4,Uid:dd0de801-e3e8-44b8-afed-383a8eb729ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"466ddc86aab6711d25e4be68371bad2fdf9d4f12d5915380387c92d529d88ea7\"" Jan 20 01:43:36.789064 containerd[1496]: time="2026-01-20T01:43:36.788988246Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:36.790310 containerd[1496]: time="2026-01-20T01:43:36.790261226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:36.790468 containerd[1496]: time="2026-01-20T01:43:36.790389953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:36.790928 kubelet[2689]: E0120 01:43:36.790794 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:36.791391 kubelet[2689]: E0120 01:43:36.790908 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:36.793481 kubelet[2689]: E0120 01:43:36.791364 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44szk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-fhvkc_calico-apiserver(573ad695-5762-4b18-9450-3954cd6448a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:36.793481 kubelet[2689]: E0120 01:43:36.793155 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:43:36.795198 containerd[1496]: time="2026-01-20T01:43:36.794783330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:43:36.889197 containerd[1496]: time="2026-01-20T01:43:36.889134361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-849c94fcc7-89lqr,Uid:eedef20c-6169-4097-90af-4b5ed35e4c70,Namespace:calico-system,Attempt:1,} returns sandbox id \"7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1\"" Jan 20 01:43:36.959083 systemd-networkd[1434]: caliafd8a659f75: Gained IPv6LL Jan 20 01:43:37.108281 containerd[1496]: time="2026-01-20T01:43:37.108015136Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:37.109446 containerd[1496]: time="2026-01-20T01:43:37.109274926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:43:37.109446 containerd[1496]: time="2026-01-20T01:43:37.109369438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:43:37.109776 kubelet[2689]: E0120 01:43:37.109702 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:37.110673 kubelet[2689]: E0120 
01:43:37.109790 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:37.110673 kubelet[2689]: E0120 01:43:37.110157 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5a4b17c258084135abe35c802ee47f41,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:37.111459 containerd[1496]: time="2026-01-20T01:43:37.111409286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:43:37.226299 kubelet[2689]: E0120 01:43:37.225460 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:43:37.239348 kubelet[2689]: E0120 01:43:37.239247 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:43:37.343219 systemd-networkd[1434]: calid03275a4422: Gained IPv6LL Jan 20 01:43:37.440154 containerd[1496]: time="2026-01-20T01:43:37.439979195Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:37.442156 containerd[1496]: time="2026-01-20T01:43:37.441976702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:43:37.442156 containerd[1496]: time="2026-01-20T01:43:37.442050261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:37.442853 kubelet[2689]: E0120 01:43:37.442492 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:37.442853 kubelet[2689]: E0120 01:43:37.442650 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:37.444472 containerd[1496]: time="2026-01-20T01:43:37.443775089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:43:37.452374 kubelet[2689]: E0120 01:43:37.451487 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttbj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849c94fcc7-89lqr_calico-system(eedef20c-6169-4097-90af-4b5ed35e4c70): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:37.455257 kubelet[2689]: E0120 01:43:37.455189 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:43:37.663258 systemd-networkd[1434]: cali8e20fe8338e: Gained IPv6LL Jan 20 01:43:37.703889 kernel: 
bpftool[4791]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 20 01:43:37.770527 containerd[1496]: time="2026-01-20T01:43:37.770245649Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:37.773149 containerd[1496]: time="2026-01-20T01:43:37.772862933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:43:37.773149 containerd[1496]: time="2026-01-20T01:43:37.772993236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:37.775911 kubelet[2689]: E0120 01:43:37.774029 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:37.775911 kubelet[2689]: E0120 01:43:37.774202 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:37.775911 kubelet[2689]: E0120 01:43:37.774585 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:37.776458 kubelet[2689]: E0120 01:43:37.776399 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:43:38.104325 systemd-networkd[1434]: vxlan.calico: Link UP Jan 20 01:43:38.104337 systemd-networkd[1434]: vxlan.calico: Gained carrier Jan 20 01:43:38.245087 kubelet[2689]: E0120 01:43:38.243377 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:43:38.245087 kubelet[2689]: E0120 01:43:38.243731 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:43:38.254923 kubelet[2689]: E0120 01:43:38.254822 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:43:39.263109 systemd-networkd[1434]: vxlan.calico: Gained IPv6LL Jan 20 01:43:39.632487 systemd[1]: Started sshd@12-10.230.30.54:22-164.92.217.44:43724.service - OpenSSH per-connection server daemon (164.92.217.44:43724). Jan 20 01:43:39.874295 sshd[4888]: Invalid user oracle from 164.92.217.44 port 43724 Jan 20 01:43:39.932040 sshd[4888]: Connection closed by invalid user oracle 164.92.217.44 port 43724 [preauth] Jan 20 01:43:39.934610 systemd[1]: sshd@12-10.230.30.54:22-164.92.217.44:43724.service: Deactivated successfully. Jan 20 01:43:41.721904 containerd[1496]: time="2026-01-20T01:43:41.719319101Z" level=info msg="StopPodSandbox for \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\"" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.819 [WARNING][4904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f445973-85d0-4221-8af9-3dc0c3aa4878", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56", Pod:"goldmane-666569f655-kt727", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d157e1114b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.820 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.820 [INFO][4904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" iface="eth0" netns="" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.820 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.820 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.908 [INFO][4911] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.910 [INFO][4911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.910 [INFO][4911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.932 [WARNING][4911] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.933 [INFO][4911] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.940 [INFO][4911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:41.947450 containerd[1496]: 2026-01-20 01:43:41.943 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:41.947450 containerd[1496]: time="2026-01-20T01:43:41.947093550Z" level=info msg="TearDown network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\" successfully" Jan 20 01:43:41.947450 containerd[1496]: time="2026-01-20T01:43:41.947144867Z" level=info msg="StopPodSandbox for \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\" returns successfully" Jan 20 01:43:41.949940 containerd[1496]: time="2026-01-20T01:43:41.949742432Z" level=info msg="RemovePodSandbox for \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\"" Jan 20 01:43:41.949940 containerd[1496]: time="2026-01-20T01:43:41.949819532Z" level=info msg="Forcibly stopping sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\"" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.069 [WARNING][4925] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7f445973-85d0-4221-8af9-3dc0c3aa4878", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"ead8671977698a7690234f2c0b4e74f1f6b68fe0b632fd05b889fc928241ec56", Pod:"goldmane-666569f655-kt727", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d157e1114b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.070 [INFO][4925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.070 [INFO][4925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" iface="eth0" netns="" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.070 [INFO][4925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.070 [INFO][4925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.168 [INFO][4932] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.170 [INFO][4932] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.170 [INFO][4932] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.195 [WARNING][4932] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.196 [INFO][4932] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" HandleID="k8s-pod-network.9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Workload="srv--vpmg3.gb1.brightbox.com-k8s-goldmane--666569f655--kt727-eth0" Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.201 [INFO][4932] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:42.217601 containerd[1496]: 2026-01-20 01:43:42.210 [INFO][4925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70" Jan 20 01:43:42.220544 containerd[1496]: time="2026-01-20T01:43:42.219463754Z" level=info msg="TearDown network for sandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\" successfully" Jan 20 01:43:42.244811 containerd[1496]: time="2026-01-20T01:43:42.244713425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:42.245144 containerd[1496]: time="2026-01-20T01:43:42.244910982Z" level=info msg="RemovePodSandbox \"9c1e4a78f23945b95b475fd1005ce20e3611da4e261b4b605ee95450699b5f70\" returns successfully" Jan 20 01:43:42.249890 containerd[1496]: time="2026-01-20T01:43:42.247383098Z" level=info msg="StopPodSandbox for \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\"" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.326 [WARNING][4947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"896c437d-0a8d-496f-a420-742c93e0d6a2", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6", Pod:"coredns-668d6bf9bc-jd9dv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb89d6df6bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.327 [INFO][4947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.327 [INFO][4947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" iface="eth0" netns="" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.327 [INFO][4947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.327 [INFO][4947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.377 [INFO][4954] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.378 [INFO][4954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.378 [INFO][4954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.389 [WARNING][4954] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.389 [INFO][4954] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.392 [INFO][4954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:42.401413 containerd[1496]: 2026-01-20 01:43:42.396 [INFO][4947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.409034 containerd[1496]: time="2026-01-20T01:43:42.401492907Z" level=info msg="TearDown network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\" successfully" Jan 20 01:43:42.409034 containerd[1496]: time="2026-01-20T01:43:42.401545568Z" level=info msg="StopPodSandbox for \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\" returns successfully" Jan 20 01:43:42.419161 containerd[1496]: time="2026-01-20T01:43:42.419098294Z" level=info msg="RemovePodSandbox for \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\"" Jan 20 01:43:42.419161 containerd[1496]: time="2026-01-20T01:43:42.419160848Z" level=info msg="Forcibly stopping sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\"" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.491 [WARNING][4969] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"896c437d-0a8d-496f-a420-742c93e0d6a2", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"1a09610ca0b4799cb01002fa05fc45af5cd3e0f10adb005d0ad8c9298b2305a6", Pod:"coredns-668d6bf9bc-jd9dv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieb89d6df6bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.491 [INFO][4969] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.491 [INFO][4969] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" iface="eth0" netns="" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.491 [INFO][4969] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.491 [INFO][4969] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.557 [INFO][4976] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.559 [INFO][4976] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.559 [INFO][4976] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.571 [WARNING][4976] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.572 [INFO][4976] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" HandleID="k8s-pod-network.e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--jd9dv-eth0" Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.574 [INFO][4976] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:42.580928 containerd[1496]: 2026-01-20 01:43:42.577 [INFO][4969] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a" Jan 20 01:43:42.583024 containerd[1496]: time="2026-01-20T01:43:42.580817059Z" level=info msg="TearDown network for sandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\" successfully" Jan 20 01:43:42.587635 containerd[1496]: time="2026-01-20T01:43:42.587223062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:42.587635 containerd[1496]: time="2026-01-20T01:43:42.587318735Z" level=info msg="RemovePodSandbox \"e1797bbb79ac5d823970ef058f20644a711081538dba4cb692aed136788c267a\" returns successfully" Jan 20 01:43:42.589085 containerd[1496]: time="2026-01-20T01:43:42.588546983Z" level=info msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\"" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.646 [WARNING][4990] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.646 [INFO][4990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.646 [INFO][4990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" iface="eth0" netns="" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.646 [INFO][4990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.646 [INFO][4990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.683 [INFO][4997] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.683 [INFO][4997] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.683 [INFO][4997] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.693 [WARNING][4997] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.693 [INFO][4997] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.696 [INFO][4997] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:42.700799 containerd[1496]: 2026-01-20 01:43:42.698 [INFO][4990] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.702325 containerd[1496]: time="2026-01-20T01:43:42.700889390Z" level=info msg="TearDown network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" successfully" Jan 20 01:43:42.702325 containerd[1496]: time="2026-01-20T01:43:42.700933587Z" level=info msg="StopPodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" returns successfully" Jan 20 01:43:42.702957 containerd[1496]: time="2026-01-20T01:43:42.702467919Z" level=info msg="RemovePodSandbox for \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\"" Jan 20 01:43:42.702957 containerd[1496]: time="2026-01-20T01:43:42.702526955Z" level=info msg="Forcibly stopping sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\"" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.771 [WARNING][5011] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.771 [INFO][5011] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.771 [INFO][5011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" iface="eth0" netns="" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.772 [INFO][5011] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.772 [INFO][5011] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.811 [INFO][5018] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.811 [INFO][5018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.811 [INFO][5018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.822 [WARNING][5018] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.823 [INFO][5018] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" HandleID="k8s-pod-network.a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Workload="srv--vpmg3.gb1.brightbox.com-k8s-whisker--596775c78f--n9sm2-eth0" Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.826 [INFO][5018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:42.832360 containerd[1496]: 2026-01-20 01:43:42.829 [INFO][5011] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42" Jan 20 01:43:42.832360 containerd[1496]: time="2026-01-20T01:43:42.832321240Z" level=info msg="TearDown network for sandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" successfully" Jan 20 01:43:42.846792 containerd[1496]: time="2026-01-20T01:43:42.846726158Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:42.847067 containerd[1496]: time="2026-01-20T01:43:42.846810826Z" level=info msg="RemovePodSandbox \"a40d567d31da7597832158fd3b8be8911f0e6462b32c0c310f27d52f07935b42\" returns successfully" Jan 20 01:43:42.848154 containerd[1496]: time="2026-01-20T01:43:42.847665628Z" level=info msg="StopPodSandbox for \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\"" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.906 [WARNING][5033] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0", GenerateName:"calico-kube-controllers-849c94fcc7-", Namespace:"calico-system", SelfLink:"", UID:"eedef20c-6169-4097-90af-4b5ed35e4c70", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849c94fcc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1", Pod:"calico-kube-controllers-849c94fcc7-89lqr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e20fe8338e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.906 [INFO][5033] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.906 [INFO][5033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" iface="eth0" netns="" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.907 [INFO][5033] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.907 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.947 [INFO][5040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.948 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.948 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.956 [WARNING][5040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.956 [INFO][5040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.958 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:42.964405 containerd[1496]: 2026-01-20 01:43:42.961 [INFO][5033] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:42.967954 containerd[1496]: time="2026-01-20T01:43:42.965028787Z" level=info msg="TearDown network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\" successfully" Jan 20 01:43:42.967954 containerd[1496]: time="2026-01-20T01:43:42.965103644Z" level=info msg="StopPodSandbox for \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\" returns successfully" Jan 20 01:43:42.967954 containerd[1496]: time="2026-01-20T01:43:42.967156318Z" level=info msg="RemovePodSandbox for \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\"" Jan 20 01:43:42.967954 containerd[1496]: time="2026-01-20T01:43:42.967203519Z" level=info msg="Forcibly stopping sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\"" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.031 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0", GenerateName:"calico-kube-controllers-849c94fcc7-", Namespace:"calico-system", SelfLink:"", UID:"eedef20c-6169-4097-90af-4b5ed35e4c70", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"849c94fcc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"7e98c0e5d933ae190bd163cfc9cd831123c7c1d8fb05af645db7f88aa3bec6f1", Pod:"calico-kube-controllers-849c94fcc7-89lqr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8e20fe8338e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.032 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.032 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" iface="eth0" netns="" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.032 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.032 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.075 [INFO][5061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.078 [INFO][5061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.078 [INFO][5061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.090 [WARNING][5061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.090 [INFO][5061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" HandleID="k8s-pod-network.50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--kube--controllers--849c94fcc7--89lqr-eth0" Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.093 [INFO][5061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:43.100619 containerd[1496]: 2026-01-20 01:43:43.097 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb" Jan 20 01:43:43.104403 containerd[1496]: time="2026-01-20T01:43:43.100981585Z" level=info msg="TearDown network for sandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\" successfully" Jan 20 01:43:43.106705 containerd[1496]: time="2026-01-20T01:43:43.106127736Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:43.106705 containerd[1496]: time="2026-01-20T01:43:43.106192181Z" level=info msg="RemovePodSandbox \"50df1eefebba2a3f80958216af9a7dfd0c3350463aab6291bf7c5c222ff7ccdb\" returns successfully" Jan 20 01:43:43.107275 containerd[1496]: time="2026-01-20T01:43:43.107105683Z" level=info msg="StopPodSandbox for \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\"" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.173 [WARNING][5075] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573ad695-5762-4b18-9450-3954cd6448a6", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562", Pod:"calico-apiserver-799b8f498b-fhvkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafd8a659f75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.173 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.173 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" iface="eth0" netns="" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.173 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.173 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.224 [INFO][5082] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.225 [INFO][5082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.225 [INFO][5082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.234 [WARNING][5082] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.234 [INFO][5082] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.236 [INFO][5082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:43.241277 containerd[1496]: 2026-01-20 01:43:43.238 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.241277 containerd[1496]: time="2026-01-20T01:43:43.241063602Z" level=info msg="TearDown network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\" successfully" Jan 20 01:43:43.241277 containerd[1496]: time="2026-01-20T01:43:43.241116390Z" level=info msg="StopPodSandbox for \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\" returns successfully" Jan 20 01:43:43.243636 containerd[1496]: time="2026-01-20T01:43:43.243279646Z" level=info msg="RemovePodSandbox for \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\"" Jan 20 01:43:43.243636 containerd[1496]: time="2026-01-20T01:43:43.243343568Z" level=info msg="Forcibly stopping sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\"" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.309 [WARNING][5096] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"573ad695-5762-4b18-9450-3954cd6448a6", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"dfe6e2447f825d585a10132793ee152ebc3bdc52ddf7b9d4ce6a0a9065a47562", Pod:"calico-apiserver-799b8f498b-fhvkc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliafd8a659f75", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.310 [INFO][5096] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.311 [INFO][5096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" iface="eth0" netns="" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.311 [INFO][5096] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.311 [INFO][5096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.350 [INFO][5103] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.350 [INFO][5103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.350 [INFO][5103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.360 [WARNING][5103] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.360 [INFO][5103] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" HandleID="k8s-pod-network.a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--fhvkc-eth0" Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.362 [INFO][5103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:43.368008 containerd[1496]: 2026-01-20 01:43:43.365 [INFO][5096] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5" Jan 20 01:43:43.368008 containerd[1496]: time="2026-01-20T01:43:43.367590704Z" level=info msg="TearDown network for sandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\" successfully" Jan 20 01:43:43.372332 containerd[1496]: time="2026-01-20T01:43:43.372293023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:43.372546 containerd[1496]: time="2026-01-20T01:43:43.372362914Z" level=info msg="RemovePodSandbox \"a0fdcaf0b7e83994fadfab7fe56ea4bd7616dc2b2ca9d59c605e3047282cc7f5\" returns successfully" Jan 20 01:43:43.373618 containerd[1496]: time="2026-01-20T01:43:43.373461379Z" level=info msg="StopPodSandbox for \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\"" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.433 [WARNING][5117] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db", Pod:"csi-node-driver-w59jj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43c5a84df5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.434 [INFO][5117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.434 [INFO][5117] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" iface="eth0" netns="" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.434 [INFO][5117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.434 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.467 [INFO][5125] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.467 [INFO][5125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.467 [INFO][5125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.477 [WARNING][5125] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.477 [INFO][5125] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.480 [INFO][5125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:43.484582 containerd[1496]: 2026-01-20 01:43:43.482 [INFO][5117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.486467 containerd[1496]: time="2026-01-20T01:43:43.484640189Z" level=info msg="TearDown network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\" successfully" Jan 20 01:43:43.486467 containerd[1496]: time="2026-01-20T01:43:43.484679921Z" level=info msg="StopPodSandbox for \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\" returns successfully" Jan 20 01:43:43.486467 containerd[1496]: time="2026-01-20T01:43:43.485736709Z" level=info msg="RemovePodSandbox for \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\"" Jan 20 01:43:43.486467 containerd[1496]: time="2026-01-20T01:43:43.485788880Z" level=info msg="Forcibly stopping sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\"" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.538 [WARNING][5140] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"c7961cb37c3180d53a9f998d4d4b5da9f96eb6e2a6d46496f4d963c9fcd3c4db", Pod:"csi-node-driver-w59jj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali43c5a84df5e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.538 [INFO][5140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.539 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" iface="eth0" netns="" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.540 [INFO][5140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.540 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.585 [INFO][5147] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.585 [INFO][5147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.585 [INFO][5147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.597 [WARNING][5147] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.598 [INFO][5147] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" HandleID="k8s-pod-network.38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Workload="srv--vpmg3.gb1.brightbox.com-k8s-csi--node--driver--w59jj-eth0" Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.599 [INFO][5147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:43.605806 containerd[1496]: 2026-01-20 01:43:43.602 [INFO][5140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27" Jan 20 01:43:43.605806 containerd[1496]: time="2026-01-20T01:43:43.605618401Z" level=info msg="TearDown network for sandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\" successfully" Jan 20 01:43:43.611392 containerd[1496]: time="2026-01-20T01:43:43.610991660Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:43:43.611392 containerd[1496]: time="2026-01-20T01:43:43.611052788Z" level=info msg="RemovePodSandbox \"38414ca2589cfe19b110acbfd2a7cb10ae9bab9921901d2d47ea0adf68a99a27\" returns successfully" Jan 20 01:43:44.673132 containerd[1496]: time="2026-01-20T01:43:44.673009821Z" level=info msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\"" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.739 [INFO][5170] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.740 [INFO][5170] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" iface="eth0" netns="/var/run/netns/cni-9a65b1fe-bdb3-b73b-5b2c-e8599fc64640" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.740 [INFO][5170] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" iface="eth0" netns="/var/run/netns/cni-9a65b1fe-bdb3-b73b-5b2c-e8599fc64640" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.741 [INFO][5170] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" iface="eth0" netns="/var/run/netns/cni-9a65b1fe-bdb3-b73b-5b2c-e8599fc64640" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.741 [INFO][5170] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.742 [INFO][5170] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.776 [INFO][5178] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.777 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.777 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.787 [WARNING][5178] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.787 [INFO][5178] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.789 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:44.794888 containerd[1496]: 2026-01-20 01:43:44.791 [INFO][5170] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:43:44.799213 containerd[1496]: time="2026-01-20T01:43:44.798898426Z" level=info msg="TearDown network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" successfully" Jan 20 01:43:44.799213 containerd[1496]: time="2026-01-20T01:43:44.798975806Z" level=info msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" returns successfully" Jan 20 01:43:44.801543 systemd[1]: run-netns-cni\x2d9a65b1fe\x2dbdb3\x2db73b\x2d5b2c\x2de8599fc64640.mount: Deactivated successfully. 
Jan 20 01:43:44.804590 containerd[1496]: time="2026-01-20T01:43:44.801975998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bfff8c98-mt7kn,Uid:5bb26b29-89e1-4055-a3dd-e9f6156c0d75,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:43:44.994429 systemd-networkd[1434]: cali545a1c5c4a7: Link UP Jan 20 01:43:44.994775 systemd-networkd[1434]: cali545a1c5c4a7: Gained carrier Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.883 [INFO][5185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0 calico-apiserver-66bfff8c98- calico-apiserver 5bb26b29-89e1-4055-a3dd-e9f6156c0d75 1097 0 2026-01-20 01:43:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66bfff8c98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com calico-apiserver-66bfff8c98-mt7kn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali545a1c5c4a7 [] [] }} ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.884 [INFO][5185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.925 [INFO][5197] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" HandleID="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.926 [INFO][5197] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" HandleID="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"calico-apiserver-66bfff8c98-mt7kn", "timestamp":"2026-01-20 01:43:44.92591696 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.926 [INFO][5197] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.926 [INFO][5197] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
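Every IPAM operation in this section, allocation and release alike, is bracketed by the same "About to acquire / Acquired / Released host-wide IPAM lock" triple, so the concurrent CNI invocations visible above (distinct [49xx]/[50xx]/[51xx] IDs) cannot race on the node's shared allocation blocks. The log does not show how the lock is implemented; purely as an illustration of the acquire-mutate-release pattern the ipam_plugin.go messages describe, an exclusive file lock would look like this (lock path and helper name are assumptions):

package main

import (
	"os"
	"syscall"
)

// withHostWideLock serializes a critical section with an exclusive flock.
// Illustrative only: the real lock path and mechanism are not shown in the
// log.
func withHostWideLock(lockPath string, fn func() error) error {
	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn() // assign or release addresses while holding the lock
}

func main() {
	_ = withHostWideLock("/tmp/ipam.lock", func() error { return nil })
}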
Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.926 [INFO][5197] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.947 [INFO][5197] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.955 [INFO][5197] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.962 [INFO][5197] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.964 [INFO][5197] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.968 [INFO][5197] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.968 [INFO][5197] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.970 [INFO][5197] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8 Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.975 [INFO][5197] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.984 [INFO][5197] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.135/26] block=192.168.21.128/26 handle="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.984 [INFO][5197] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.135/26] handle="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.984 [INFO][5197] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
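The assignment above shows Calico's block-affine IPAM at work: this node holds an affinity for the /26 block 192.168.21.128/26, and every pod address in this section (.128 for csi-node-driver, .129 for coredns, .132 and .134 for the calico workloads, now .135 for the new apiserver pod) falls inside it, so claiming the new IP only requires rewriting that one block record plus a handle, all under the host-wide lock. The containment itself is plain CIDR math; with the values taken from the log:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Values from the log: the node holds an affinity for this /26 block
	// and assigned 192.168.21.135 from it.
	block := netip.MustParsePrefix("192.168.21.128/26")
	addr := netip.MustParseAddr("192.168.21.135")

	// A /26 spans 64 addresses, here .128 through .191.
	fmt.Println(block.Contains(addr)) // true
}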
Jan 20 01:43:45.024636 containerd[1496]: 2026-01-20 01:43:44.985 [INFO][5197] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.135/26] IPv6=[] ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" HandleID="k8s-pod-network.d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:45.025873 containerd[1496]: 2026-01-20 01:43:44.989 [INFO][5185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0", GenerateName:"calico-apiserver-66bfff8c98-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bb26b29-89e1-4055-a3dd-e9f6156c0d75", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bfff8c98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66bfff8c98-mt7kn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali545a1c5c4a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:45.025873 containerd[1496]: 2026-01-20 01:43:44.989 [INFO][5185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.135/32] ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:45.025873 containerd[1496]: 2026-01-20 01:43:44.990 [INFO][5185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali545a1c5c4a7 ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:45.025873 containerd[1496]: 2026-01-20 01:43:44.994 [INFO][5185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:45.025873 containerd[1496]: 2026-01-20 
01:43:44.995 [INFO][5185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0", GenerateName:"calico-apiserver-66bfff8c98-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bb26b29-89e1-4055-a3dd-e9f6156c0d75", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bfff8c98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8", Pod:"calico-apiserver-66bfff8c98-mt7kn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali545a1c5c4a7", MAC:"ee:e0:a7:3a:b9:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:45.025873 containerd[1496]: 2026-01-20 01:43:45.015 [INFO][5185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8" Namespace="calico-apiserver" Pod="calico-apiserver-66bfff8c98-mt7kn" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:43:45.087596 containerd[1496]: time="2026-01-20T01:43:45.086079073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:45.087596 containerd[1496]: time="2026-01-20T01:43:45.086239951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:45.087596 containerd[1496]: time="2026-01-20T01:43:45.086279180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:45.087596 containerd[1496]: time="2026-01-20T01:43:45.086482296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:45.131541 systemd[1]: run-containerd-runc-k8s.io-d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8-runc.cIsQH7.mount: Deactivated successfully. 
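Between the two endpoint dumps above, the CNI plugin fixes the host-side veth name (cali545a1c5c4a7) and MAC before writing the endpoint back to the datastore. The name is deterministic per workload; below is a toy sketch of one plausible scheme, assuming a sha1 hash truncated to fit Linux's 15-character interface-name limit (the prefix "cali" matches the log, but the exact hash input Calico uses is an assumption here):

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches a deterministic host-side interface name: a fixed
// prefix plus a truncated hash of a workload identifier, keeping the
// result within the 15-character IFNAMSIZ budget (4 + 11 = 15).
// Illustrative only; not Calico's actual derivation.
func vethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical identifier chosen for illustration.
	fmt.Println(vethName("calico-apiserver/calico-apiserver-66bfff8c98-mt7kn"))
}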
Jan 20 01:43:45.145103 systemd[1]: Started cri-containerd-d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8.scope - libcontainer container d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8. Jan 20 01:43:45.215696 containerd[1496]: time="2026-01-20T01:43:45.215564594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66bfff8c98-mt7kn,Uid:5bb26b29-89e1-4055-a3dd-e9f6156c0d75,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8\"" Jan 20 01:43:45.221052 containerd[1496]: time="2026-01-20T01:43:45.220906994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:45.530224 containerd[1496]: time="2026-01-20T01:43:45.530059737Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:45.531679 containerd[1496]: time="2026-01-20T01:43:45.531598752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:45.531911 containerd[1496]: time="2026-01-20T01:43:45.531613937Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:45.532299 kubelet[2689]: E0120 01:43:45.532217 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:45.534159 kubelet[2689]: E0120 01:43:45.532328 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:45.534159 kubelet[2689]: E0120 01:43:45.532600 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h49ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66bfff8c98-mt7kn_calico-apiserver(5bb26b29-89e1-4055-a3dd-e9f6156c0d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:45.534159 kubelet[2689]: E0120 01:43:45.533885 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:43:45.674809 containerd[1496]: time="2026-01-20T01:43:45.674738080Z" level=info msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\"" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.748 [INFO][5261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.748 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" iface="eth0" netns="/var/run/netns/cni-9e7b7942-2a4c-2353-b4bc-80fb454c8a60" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.749 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" iface="eth0" netns="/var/run/netns/cni-9e7b7942-2a4c-2353-b4bc-80fb454c8a60" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.750 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" iface="eth0" netns="/var/run/netns/cni-9e7b7942-2a4c-2353-b4bc-80fb454c8a60" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.750 [INFO][5261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.751 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.788 [INFO][5269] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.789 [INFO][5269] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.789 [INFO][5269] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.801 [WARNING][5269] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.801 [INFO][5269] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.803 [INFO][5269] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:45.809438 containerd[1496]: 2026-01-20 01:43:45.805 [INFO][5261] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:43:45.812209 containerd[1496]: time="2026-01-20T01:43:45.810373958Z" level=info msg="TearDown network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" successfully" Jan 20 01:43:45.812209 containerd[1496]: time="2026-01-20T01:43:45.810448302Z" level=info msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" returns successfully" Jan 20 01:43:45.811914 systemd[1]: run-netns-cni\x2d9e7b7942\x2d2a4c\x2d2353\x2db4bc\x2d80fb454c8a60.mount: Deactivated successfully. Jan 20 01:43:45.814402 containerd[1496]: time="2026-01-20T01:43:45.814365872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-5jdcb,Uid:63686bdb-630e-4c31-bb10-61a7b178bd09,Namespace:calico-apiserver,Attempt:1,}" Jan 20 01:43:46.004953 systemd-networkd[1434]: cali29ef186aa54: Link UP Jan 20 01:43:46.008342 systemd-networkd[1434]: cali29ef186aa54: Gained carrier Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.890 [INFO][5276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0 calico-apiserver-799b8f498b- calico-apiserver 63686bdb-630e-4c31-bb10-61a7b178bd09 1107 0 2026-01-20 01:42:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799b8f498b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com calico-apiserver-799b8f498b-5jdcb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29ef186aa54 [] [] }} ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.890 [INFO][5276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.928 [INFO][5287] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" HandleID="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.928 [INFO][5287] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" HandleID="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"calico-apiserver-799b8f498b-5jdcb", "timestamp":"2026-01-20 01:43:45.92843065 +0000 UTC"}, 
Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.928 [INFO][5287] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.929 [INFO][5287] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.929 [INFO][5287] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.948 [INFO][5287] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.957 [INFO][5287] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.964 [INFO][5287] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.966 [INFO][5287] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.970 [INFO][5287] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.970 [INFO][5287] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.972 [INFO][5287] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160 Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.978 [INFO][5287] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.989 [INFO][5287] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.136/26] block=192.168.21.128/26 handle="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.989 [INFO][5287] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.136/26] handle="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.989 [INFO][5287] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 01:43:46.033036 containerd[1496]: 2026-01-20 01:43:45.989 [INFO][5287] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.136/26] IPv6=[] ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" HandleID="k8s-pod-network.3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:46.034195 containerd[1496]: 2026-01-20 01:43:45.993 [INFO][5276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"63686bdb-630e-4c31-bb10-61a7b178bd09", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-799b8f498b-5jdcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29ef186aa54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:46.034195 containerd[1496]: 2026-01-20 01:43:45.993 [INFO][5276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.136/32] ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:46.034195 containerd[1496]: 2026-01-20 01:43:45.993 [INFO][5276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29ef186aa54 ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:46.034195 containerd[1496]: 2026-01-20 01:43:46.002 [INFO][5276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:46.034195 containerd[1496]: 2026-01-20 
01:43:46.007 [INFO][5276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"63686bdb-630e-4c31-bb10-61a7b178bd09", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160", Pod:"calico-apiserver-799b8f498b-5jdcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29ef186aa54", MAC:"0a:27:dd:6e:df:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:46.034195 containerd[1496]: 2026-01-20 01:43:46.024 [INFO][5276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160" Namespace="calico-apiserver" Pod="calico-apiserver-799b8f498b-5jdcb" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:43:46.073669 containerd[1496]: time="2026-01-20T01:43:46.072130659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:46.073669 containerd[1496]: time="2026-01-20T01:43:46.072380817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:46.073669 containerd[1496]: time="2026-01-20T01:43:46.072465649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:46.077416 containerd[1496]: time="2026-01-20T01:43:46.076925316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:46.123091 systemd[1]: Started cri-containerd-3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160.scope - libcontainer container 3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160. 
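The sandbox for calico-apiserver-799b8f498b-5jdcb starts here, but the entries that follow show the same ghcr.io tag again resolving to NotFound, so the kubelet cycles the container through ErrImagePull and then ImagePullBackOff with a growing delay. A sketch of that escalation, assuming kubelet's commonly cited defaults of a 10s initial period doubling up to a 5m cap (treat both values as assumptions for this build):

package main

import (
	"fmt"
	"time"
)

// imagePullDelay models the back-off escalation visible in the surrounding
// log: each failed pull doubles the wait until a configured ceiling.
func imagePullDelay(failures int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < failures; i++ {
		d *= 2
		if d > max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d -> back off %v\n", n, imagePullDelay(n, 10*time.Second, 5*time.Minute))
	}
	// failure 1 -> 10s, 2 -> 20s, 3 -> 40s, 4 -> 1m20s, 5 -> 2m40s, 6 -> 5m0s (capped)
}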
Jan 20 01:43:46.186252 containerd[1496]: time="2026-01-20T01:43:46.186176099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799b8f498b-5jdcb,Uid:63686bdb-630e-4c31-bb10-61a7b178bd09,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160\"" Jan 20 01:43:46.188909 containerd[1496]: time="2026-01-20T01:43:46.188775022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:46.292314 kubelet[2689]: E0120 01:43:46.292257 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:43:46.495219 systemd-networkd[1434]: cali545a1c5c4a7: Gained IPv6LL Jan 20 01:43:46.502675 containerd[1496]: time="2026-01-20T01:43:46.502609216Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:46.504228 containerd[1496]: time="2026-01-20T01:43:46.504121399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:46.504346 containerd[1496]: time="2026-01-20T01:43:46.504145514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:46.504859 kubelet[2689]: E0120 01:43:46.504745 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:46.504964 kubelet[2689]: E0120 01:43:46.504930 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:46.507423 kubelet[2689]: E0120 01:43:46.505601 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jd9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-5jdcb_calico-apiserver(63686bdb-630e-4c31-bb10-61a7b178bd09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:46.509006 kubelet[2689]: E0120 01:43:46.508892 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:43:46.674770 containerd[1496]: time="2026-01-20T01:43:46.674702840Z" level=info msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\"" Jan 20 01:43:46.677569 containerd[1496]: time="2026-01-20T01:43:46.677505522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.779 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.779 [INFO][5355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" iface="eth0" netns="/var/run/netns/cni-da9b176e-b96b-aec2-83be-337cca73d91f" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.780 [INFO][5355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" iface="eth0" netns="/var/run/netns/cni-da9b176e-b96b-aec2-83be-337cca73d91f" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.780 [INFO][5355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" iface="eth0" netns="/var/run/netns/cni-da9b176e-b96b-aec2-83be-337cca73d91f" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.780 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.780 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.834 [INFO][5362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.835 [INFO][5362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.836 [INFO][5362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.847 [WARNING][5362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.848 [INFO][5362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.849 [INFO][5362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:46.857762 containerd[1496]: 2026-01-20 01:43:46.852 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:43:46.861402 containerd[1496]: time="2026-01-20T01:43:46.859161531Z" level=info msg="TearDown network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" successfully" Jan 20 01:43:46.861402 containerd[1496]: time="2026-01-20T01:43:46.859240154Z" level=info msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" returns successfully" Jan 20 01:43:46.865357 containerd[1496]: time="2026-01-20T01:43:46.863163010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gjtls,Uid:94aa1e8b-d364-40d2-9c05-39e890317a94,Namespace:kube-system,Attempt:1,}" Jan 20 01:43:46.867458 systemd[1]: run-netns-cni\x2dda9b176e\x2db96b\x2daec2\x2d83be\x2d337cca73d91f.mount: Deactivated successfully. Jan 20 01:43:47.005691 containerd[1496]: time="2026-01-20T01:43:47.005571885Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:47.009193 containerd[1496]: time="2026-01-20T01:43:47.007538441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:43:47.009193 containerd[1496]: time="2026-01-20T01:43:47.007669833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:43:47.009442 kubelet[2689]: E0120 01:43:47.007961 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:47.009442 kubelet[2689]: E0120 01:43:47.008099 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:43:47.009442 kubelet[2689]: E0120 01:43:47.009085 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:47.024278 containerd[1496]: time="2026-01-20T01:43:47.022472777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:43:47.098208 systemd-networkd[1434]: calie99c059d2b3: Link UP Jan 20 01:43:47.100113 systemd-networkd[1434]: calie99c059d2b3: Gained carrier Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:46.962 [INFO][5368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0 coredns-668d6bf9bc- kube-system 94aa1e8b-d364-40d2-9c05-39e890317a94 1125 0 2026-01-20 01:42:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-vpmg3.gb1.brightbox.com coredns-668d6bf9bc-gjtls eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie99c059d2b3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:46.963 [INFO][5368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.010 [INFO][5381] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" HandleID="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.010 [INFO][5381] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" HandleID="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac7e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-vpmg3.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-gjtls", "timestamp":"2026-01-20 01:43:47.010006668 +0000 UTC"}, Hostname:"srv-vpmg3.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.011 [INFO][5381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.011 [INFO][5381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.011 [INFO][5381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-vpmg3.gb1.brightbox.com' Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.045 [INFO][5381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.054 [INFO][5381] ipam/ipam.go 394: Looking up existing affinities for host host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.063 [INFO][5381] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.066 [INFO][5381] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.070 [INFO][5381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.070 [INFO][5381] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.072 [INFO][5381] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0 Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.078 [INFO][5381] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 
handle="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.086 [INFO][5381] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.137/26] block=192.168.21.128/26 handle="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.086 [INFO][5381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.137/26] handle="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" host="srv-vpmg3.gb1.brightbox.com" Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.086 [INFO][5381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:43:47.128588 containerd[1496]: 2026-01-20 01:43:47.087 [INFO][5381] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.137/26] IPv6=[] ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" HandleID="k8s-pod-network.33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:47.129788 containerd[1496]: 2026-01-20 01:43:47.091 [INFO][5368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94aa1e8b-d364-40d2-9c05-39e890317a94", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-gjtls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie99c059d2b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:47.129788 containerd[1496]: 2026-01-20 01:43:47.091 [INFO][5368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.137/32] 
ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:47.129788 containerd[1496]: 2026-01-20 01:43:47.091 [INFO][5368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie99c059d2b3 ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:47.129788 containerd[1496]: 2026-01-20 01:43:47.101 [INFO][5368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:47.129788 containerd[1496]: 2026-01-20 01:43:47.101 [INFO][5368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94aa1e8b-d364-40d2-9c05-39e890317a94", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0", Pod:"coredns-668d6bf9bc-gjtls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie99c059d2b3", MAC:"ea:db:0e:86:49:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:43:47.129788 containerd[1496]: 2026-01-20 01:43:47.122 [INFO][5368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0" Namespace="kube-system" Pod="coredns-668d6bf9bc-gjtls" 
WorkloadEndpoint="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:43:47.162253 containerd[1496]: time="2026-01-20T01:43:47.161916717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 20 01:43:47.162253 containerd[1496]: time="2026-01-20T01:43:47.162002377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 20 01:43:47.162506 containerd[1496]: time="2026-01-20T01:43:47.162033234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:47.162506 containerd[1496]: time="2026-01-20T01:43:47.162179578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 20 01:43:47.219363 systemd[1]: Started cri-containerd-33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0.scope - libcontainer container 33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0. Jan 20 01:43:47.278123 containerd[1496]: time="2026-01-20T01:43:47.277905737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gjtls,Uid:94aa1e8b-d364-40d2-9c05-39e890317a94,Namespace:kube-system,Attempt:1,} returns sandbox id \"33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0\"" Jan 20 01:43:47.284857 containerd[1496]: time="2026-01-20T01:43:47.284306347Z" level=info msg="CreateContainer within sandbox \"33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 01:43:47.302613 kubelet[2689]: E0120 01:43:47.302539 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:43:47.310651 containerd[1496]: time="2026-01-20T01:43:47.310592296Z" level=info msg="CreateContainer within sandbox \"33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4fc64bdb569e3a64ecf2bb7fc366a7d5643c43085958001d7316823d2dcfa521\"" Jan 20 01:43:47.313125 containerd[1496]: time="2026-01-20T01:43:47.312191180Z" level=info msg="StartContainer for \"4fc64bdb569e3a64ecf2bb7fc366a7d5643c43085958001d7316823d2dcfa521\"" Jan 20 01:43:47.341535 containerd[1496]: time="2026-01-20T01:43:47.341478273Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:47.342811 containerd[1496]: time="2026-01-20T01:43:47.342752565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:43:47.343134 containerd[1496]: 
time="2026-01-20T01:43:47.343038044Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:43:47.345897 kubelet[2689]: E0120 01:43:47.343423 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:47.345897 kubelet[2689]: E0120 01:43:47.343533 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:43:47.345897 kubelet[2689]: E0120 01:43:47.344373 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:47.345897 kubelet[2689]: E0120 01:43:47.345607 2689 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:43:47.396084 systemd[1]: Started cri-containerd-4fc64bdb569e3a64ecf2bb7fc366a7d5643c43085958001d7316823d2dcfa521.scope - libcontainer container 4fc64bdb569e3a64ecf2bb7fc366a7d5643c43085958001d7316823d2dcfa521. Jan 20 01:43:47.457307 containerd[1496]: time="2026-01-20T01:43:47.457247256Z" level=info msg="StartContainer for \"4fc64bdb569e3a64ecf2bb7fc366a7d5643c43085958001d7316823d2dcfa521\" returns successfully" Jan 20 01:43:47.583096 systemd-networkd[1434]: cali29ef186aa54: Gained IPv6LL Jan 20 01:43:47.863260 systemd[1]: run-containerd-runc-k8s.io-33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0-runc.yUhB2P.mount: Deactivated successfully. Jan 20 01:43:48.331686 kubelet[2689]: I0120 01:43:48.330314 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gjtls" podStartSLOduration=61.330243779 podStartE2EDuration="1m1.330243779s" podCreationTimestamp="2026-01-20 01:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 01:43:48.329759712 +0000 UTC m=+66.856187111" watchObservedRunningTime="2026-01-20 01:43:48.330243779 +0000 UTC m=+66.856671169" Jan 20 01:43:48.479235 systemd-networkd[1434]: calie99c059d2b3: Gained IPv6LL Jan 20 01:43:50.678509 containerd[1496]: time="2026-01-20T01:43:50.677180960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:43:50.996600 containerd[1496]: time="2026-01-20T01:43:50.996354997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:50.997742 containerd[1496]: time="2026-01-20T01:43:50.997684944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:43:50.998050 containerd[1496]: time="2026-01-20T01:43:50.997814447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:43:50.998167 kubelet[2689]: E0120 01:43:50.998100 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:51.000035 kubelet[2689]: E0120 01:43:50.998187 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:43:51.000035 kubelet[2689]: E0120 01:43:50.998595 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5a4b17c258084135abe35c802ee47f41,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:51.000960 containerd[1496]: time="2026-01-20T01:43:51.000641001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:43:51.306331 containerd[1496]: time="2026-01-20T01:43:51.306259263Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:51.307917 containerd[1496]: time="2026-01-20T01:43:51.307642092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:43:51.307917 containerd[1496]: time="2026-01-20T01:43:51.307665589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:51.308368 kubelet[2689]: E0120 01:43:51.308293 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:51.308481 kubelet[2689]: E0120 
01:43:51.308387 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:43:51.308923 kubelet[2689]: E0120 01:43:51.308778 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zx2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kt727_calico-system(7f445973-85d0-4221-8af9-3dc0c3aa4878): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:51.310616 kubelet[2689]: E0120 01:43:51.309997 2689 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:43:51.310908 containerd[1496]: time="2026-01-20T01:43:51.310140873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:43:51.628985 containerd[1496]: time="2026-01-20T01:43:51.628760516Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:51.630746 containerd[1496]: time="2026-01-20T01:43:51.630697168Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:43:51.630887 containerd[1496]: time="2026-01-20T01:43:51.630814865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:43:51.631174 kubelet[2689]: E0120 01:43:51.631120 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:51.631293 kubelet[2689]: E0120 01:43:51.631197 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:43:51.631956 kubelet[2689]: E0120 01:43:51.631599 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44szk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-fhvkc_calico-apiserver(573ad695-5762-4b18-9450-3954cd6448a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:51.632359 containerd[1496]: time="2026-01-20T01:43:51.631642811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:43:51.632757 kubelet[2689]: E0120 01:43:51.632720 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:43:51.940209 containerd[1496]: time="2026-01-20T01:43:51.939924357Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:43:51.941633 containerd[1496]: time="2026-01-20T01:43:51.941503731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:43:51.941818 containerd[1496]: time="2026-01-20T01:43:51.941523091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:51.942032 kubelet[2689]: E0120 01:43:51.941945 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:51.942873 kubelet[2689]: E0120 01:43:51.942049 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:43:51.942954 containerd[1496]: time="2026-01-20T01:43:51.942543287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:43:51.943511 kubelet[2689]: E0120 01:43:51.943375 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:51.945678 kubelet[2689]: E0120 01:43:51.945329 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:43:52.255755 containerd[1496]: time="2026-01-20T01:43:52.254640264Z" level=info msg="trying next host - response was http.StatusNotFound" 
host=ghcr.io Jan 20 01:43:52.256626 containerd[1496]: time="2026-01-20T01:43:52.256582375Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:43:52.256909 containerd[1496]: time="2026-01-20T01:43:52.256659661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:43:52.257267 kubelet[2689]: E0120 01:43:52.257101 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:52.257267 kubelet[2689]: E0120 01:43:52.257186 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:43:52.257781 kubelet[2689]: E0120 01:43:52.257385 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttbj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849c94fcc7-89lqr_calico-system(eedef20c-6169-4097-90af-4b5ed35e4c70): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:43:52.260332 kubelet[2689]: E0120 01:43:52.260240 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:43:59.680160 kubelet[2689]: E0120 01:43:59.679915 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:44:00.675581 containerd[1496]: time="2026-01-20T01:44:00.675436275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:01.008874 containerd[1496]: time="2026-01-20T01:44:01.007772481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:01.010433 containerd[1496]: time="2026-01-20T01:44:01.010348774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:01.010730 containerd[1496]: time="2026-01-20T01:44:01.010588350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:01.011269 kubelet[2689]: E0120 01:44:01.011171 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:01.011814 kubelet[2689]: E0120 01:44:01.011287 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:01.011814 kubelet[2689]: E0120 01:44:01.011545 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jd9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-5jdcb_calico-apiserver(63686bdb-630e-4c31-bb10-61a7b178bd09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" logger="UnhandledError" Jan 20 01:44:01.013342 kubelet[2689]: E0120 01:44:01.013256 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:44:01.676346 containerd[1496]: time="2026-01-20T01:44:01.676223639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:01.993293 containerd[1496]: time="2026-01-20T01:44:01.993061696Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:01.994361 containerd[1496]: time="2026-01-20T01:44:01.994303601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:01.994645 containerd[1496]: time="2026-01-20T01:44:01.994437323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:01.994742 kubelet[2689]: E0120 01:44:01.994678 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:01.994870 kubelet[2689]: E0120 01:44:01.994758 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:01.995077 kubelet[2689]: E0120 01:44:01.994990 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h49ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66bfff8c98-mt7kn_calico-apiserver(5bb26b29-89e1-4055-a3dd-e9f6156c0d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:01.996824 kubelet[2689]: E0120 01:44:01.996748 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:44:03.681866 kubelet[2689]: E0120 01:44:03.681759 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:44:03.683443 kubelet[2689]: E0120 01:44:03.682300 2689 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:44:03.683443 kubelet[2689]: E0120 01:44:03.682997 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:44:04.355752 systemd[1]: Started sshd@13-10.230.30.54:22-20.161.92.111:53658.service - OpenSSH per-connection server daemon (20.161.92.111:53658). Jan 20 01:44:04.981996 sshd[5509]: Accepted publickey for core from 20.161.92.111 port 53658 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:04.985264 sshd[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:05.001988 systemd-logind[1487]: New session 12 of user core. Jan 20 01:44:05.010504 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 01:44:05.701916 kubelet[2689]: E0120 01:44:05.699592 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:44:05.749794 systemd[1]: run-containerd-runc-k8s.io-0adf4efeae3109a3a84258f1ee2511f61196ee44c0db1dc428bac4dd6854a1bc-runc.qfE8He.mount: Deactivated successfully. Jan 20 01:44:06.122722 sshd[5509]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:06.130389 systemd[1]: sshd@13-10.230.30.54:22-20.161.92.111:53658.service: Deactivated successfully. Jan 20 01:44:06.136740 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 01:44:06.139043 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Jan 20 01:44:06.140917 systemd-logind[1487]: Removed session 12. 
Jan 20 01:44:11.229279 systemd[1]: Started sshd@14-10.230.30.54:22-20.161.92.111:53662.service - OpenSSH per-connection server daemon (20.161.92.111:53662). Jan 20 01:44:11.681873 kubelet[2689]: E0120 01:44:11.681403 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:44:11.863863 sshd[5549]: Accepted publickey for core from 20.161.92.111 port 53662 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:11.869574 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:11.878446 systemd-logind[1487]: New session 13 of user core. Jan 20 01:44:11.887067 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 20 01:44:12.590242 sshd[5549]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:12.595850 systemd[1]: sshd@14-10.230.30.54:22-20.161.92.111:53662.service: Deactivated successfully. Jan 20 01:44:12.598974 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 01:44:12.600126 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit. Jan 20 01:44:12.602036 systemd-logind[1487]: Removed session 13. Jan 20 01:44:12.676933 containerd[1496]: time="2026-01-20T01:44:12.675724121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:44:13.017484 containerd[1496]: time="2026-01-20T01:44:13.017252428Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:13.019862 containerd[1496]: time="2026-01-20T01:44:13.019779805Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:44:13.021468 containerd[1496]: time="2026-01-20T01:44:13.019866046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:44:13.021561 kubelet[2689]: E0120 01:44:13.020332 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:44:13.021561 kubelet[2689]: E0120 01:44:13.020464 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:44:13.021561 kubelet[2689]: E0120 01:44:13.020746 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:13.033042 containerd[1496]: time="2026-01-20T01:44:13.032991750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:44:13.341627 containerd[1496]: time="2026-01-20T01:44:13.341520847Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:13.343235 containerd[1496]: time="2026-01-20T01:44:13.343166219Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:44:13.343686 containerd[1496]: time="2026-01-20T01:44:13.343324541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:44:13.343781 kubelet[2689]: E0120 01:44:13.343558 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:44:13.343781 kubelet[2689]: E0120 01:44:13.343627 2689 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:44:13.343951 kubelet[2689]: E0120 01:44:13.343803 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:13.345362 kubelet[2689]: E0120 01:44:13.345307 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:44:13.745075 systemd[1]: Started sshd@15-10.230.30.54:22-164.92.217.44:60086.service - OpenSSH per-connection server daemon (164.92.217.44:60086). Jan 20 01:44:13.879580 sshd[5565]: Invalid user oracle from 164.92.217.44 port 60086 Jan 20 01:44:13.896989 sshd[5565]: Connection closed by invalid user oracle 164.92.217.44 port 60086 [preauth] Jan 20 01:44:13.899678 systemd[1]: sshd@15-10.230.30.54:22-164.92.217.44:60086.service: Deactivated successfully. Jan 20 01:44:14.675323 kubelet[2689]: E0120 01:44:14.674954 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:44:14.677148 containerd[1496]: time="2026-01-20T01:44:14.677048827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:44:14.990223 containerd[1496]: time="2026-01-20T01:44:14.990032378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:14.991316 containerd[1496]: time="2026-01-20T01:44:14.991276347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:44:14.991484 containerd[1496]: time="2026-01-20T01:44:14.991378289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:44:14.991750 kubelet[2689]: E0120 01:44:14.991645 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:44:14.991750 kubelet[2689]: E0120 01:44:14.991710 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:44:14.992512 kubelet[2689]: E0120 01:44:14.991946 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5a4b17c258084135abe35c802ee47f41,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:14.995385 containerd[1496]: time="2026-01-20T01:44:14.994962625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:44:15.313927 containerd[1496]: time="2026-01-20T01:44:15.313826443Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:15.315177 containerd[1496]: time="2026-01-20T01:44:15.315047314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:44:15.315177 containerd[1496]: time="2026-01-20T01:44:15.315100148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:44:15.315463 kubelet[2689]: E0120 01:44:15.315334 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:44:15.315568 kubelet[2689]: E0120 01:44:15.315434 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:44:15.315714 kubelet[2689]: E0120 01:44:15.315629 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:15.317653 kubelet[2689]: E0120 01:44:15.317473 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:44:15.675262 containerd[1496]: time="2026-01-20T01:44:15.674897503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:44:15.986809 containerd[1496]: time="2026-01-20T01:44:15.986570926Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:15.987971 containerd[1496]: time="2026-01-20T01:44:15.987906878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:44:15.988136 containerd[1496]: time="2026-01-20T01:44:15.988070477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:44:15.988466 kubelet[2689]: E0120 01:44:15.988375 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:15.988900 kubelet[2689]: E0120 01:44:15.988478 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:15.988900 kubelet[2689]: E0120 01:44:15.988671 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttbj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849c94fcc7-89lqr_calico-system(eedef20c-6169-4097-90af-4b5ed35e4c70): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:15.990769 kubelet[2689]: E0120 01:44:15.990160 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:44:17.697292 systemd[1]: Started sshd@16-10.230.30.54:22-20.161.92.111:37680.service - OpenSSH per-connection server daemon (20.161.92.111:37680). Jan 20 01:44:18.278065 sshd[5571]: Accepted publickey for core from 20.161.92.111 port 37680 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:18.280447 sshd[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:18.288360 systemd-logind[1487]: New session 14 of user core. Jan 20 01:44:18.294077 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 20 01:44:18.680623 containerd[1496]: time="2026-01-20T01:44:18.680505418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 01:44:18.823352 sshd[5571]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:18.829639 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit. Jan 20 01:44:18.831267 systemd[1]: sshd@16-10.230.30.54:22-20.161.92.111:37680.service: Deactivated successfully. Jan 20 01:44:18.834672 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 01:44:18.836190 systemd-logind[1487]: Removed session 14. 
Jan 20 01:44:19.015112 containerd[1496]: time="2026-01-20T01:44:19.014809212Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:19.017018 containerd[1496]: time="2026-01-20T01:44:19.016864646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 01:44:19.017018 containerd[1496]: time="2026-01-20T01:44:19.016938282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:19.017373 kubelet[2689]: E0120 01:44:19.017263 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:44:19.017373 kubelet[2689]: E0120 01:44:19.017353 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 01:44:19.019265 kubelet[2689]: E0120 01:44:19.018285 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zx2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kt727_calico-system(7f445973-85d0-4221-8af9-3dc0c3aa4878): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:19.019513 containerd[1496]: time="2026-01-20T01:44:19.017790709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:19.019999 kubelet[2689]: E0120 01:44:19.019787 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:44:19.327372 containerd[1496]: time="2026-01-20T01:44:19.327288088Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:19.328772 containerd[1496]: time="2026-01-20T01:44:19.328427535Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:19.328772 containerd[1496]: time="2026-01-20T01:44:19.328477902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:19.329104 kubelet[2689]: E0120 01:44:19.329035 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:19.329224 kubelet[2689]: E0120 01:44:19.329115 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:19.329392 kubelet[2689]: E0120 01:44:19.329313 2689 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44szk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-fhvkc_calico-apiserver(573ad695-5762-4b18-9450-3954cd6448a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:19.331270 kubelet[2689]: E0120 01:44:19.331018 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:44:22.675652 containerd[1496]: time="2026-01-20T01:44:22.675101896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:22.985730 containerd[1496]: time="2026-01-20T01:44:22.985153340Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:22.987506 containerd[1496]: time="2026-01-20T01:44:22.987200950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:22.987506 containerd[1496]: time="2026-01-20T01:44:22.987396148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:22.987817 kubelet[2689]: E0120 01:44:22.987734 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:22.990116 kubelet[2689]: E0120 01:44:22.987857 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:22.990116 kubelet[2689]: E0120 01:44:22.988242 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jd9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-5jdcb_calico-apiserver(63686bdb-630e-4c31-bb10-61a7b178bd09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:22.990116 kubelet[2689]: E0120 01:44:22.989902 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:44:23.693885 kubelet[2689]: E0120 01:44:23.693156 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:44:23.932326 systemd[1]: Started sshd@17-10.230.30.54:22-20.161.92.111:54576.service - OpenSSH per-connection server daemon (20.161.92.111:54576). Jan 20 01:44:24.510967 sshd[5592]: Accepted publickey for core from 20.161.92.111 port 54576 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:24.514085 sshd[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:24.525532 systemd-logind[1487]: New session 15 of user core. Jan 20 01:44:24.531161 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 01:44:25.027320 sshd[5592]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:25.033059 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit. Jan 20 01:44:25.033728 systemd[1]: sshd@17-10.230.30.54:22-20.161.92.111:54576.service: Deactivated successfully. Jan 20 01:44:25.037821 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 01:44:25.040982 systemd-logind[1487]: Removed session 15. Jan 20 01:44:25.134321 systemd[1]: Started sshd@18-10.230.30.54:22-20.161.92.111:54592.service - OpenSSH per-connection server daemon (20.161.92.111:54592). Jan 20 01:44:25.678205 containerd[1496]: time="2026-01-20T01:44:25.678103610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:44:25.748762 sshd[5606]: Accepted publickey for core from 20.161.92.111 port 54592 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:25.751584 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:25.759967 systemd-logind[1487]: New session 16 of user core. Jan 20 01:44:25.768054 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 01:44:26.011598 containerd[1496]: time="2026-01-20T01:44:26.011294514Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:26.012821 containerd[1496]: time="2026-01-20T01:44:26.012749030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:44:26.013062 containerd[1496]: time="2026-01-20T01:44:26.012968305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:44:26.013863 kubelet[2689]: E0120 01:44:26.013576 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:26.013863 kubelet[2689]: E0120 01:44:26.013690 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:44:26.015221 kubelet[2689]: E0120 01:44:26.014522 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h49ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66bfff8c98-mt7kn_calico-apiserver(5bb26b29-89e1-4055-a3dd-e9f6156c0d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:26.016018 kubelet[2689]: E0120 01:44:26.015948 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:44:26.350238 sshd[5606]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:26.356905 systemd[1]: sshd@18-10.230.30.54:22-20.161.92.111:54592.service: Deactivated successfully. Jan 20 01:44:26.360227 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 01:44:26.361766 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit. Jan 20 01:44:26.363354 systemd-logind[1487]: Removed session 16. Jan 20 01:44:26.457199 systemd[1]: Started sshd@19-10.230.30.54:22-20.161.92.111:54596.service - OpenSSH per-connection server daemon (20.161.92.111:54596). 
Jan 20 01:44:26.684494 kubelet[2689]: E0120 01:44:26.683777 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:44:27.045931 sshd[5617]: Accepted publickey for core from 20.161.92.111 port 54596 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:27.048721 sshd[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:27.057045 systemd-logind[1487]: New session 17 of user core. Jan 20 01:44:27.063106 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 01:44:27.573965 sshd[5617]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:27.581501 systemd[1]: sshd@19-10.230.30.54:22-20.161.92.111:54596.service: Deactivated successfully. Jan 20 01:44:27.587300 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 01:44:27.588879 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit. Jan 20 01:44:27.590567 systemd-logind[1487]: Removed session 17. Jan 20 01:44:28.674223 kubelet[2689]: E0120 01:44:28.674121 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:44:30.675812 kubelet[2689]: E0120 01:44:30.675443 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:44:32.692012 systemd[1]: Started sshd@20-10.230.30.54:22-20.161.92.111:58512.service - OpenSSH per-connection server daemon (20.161.92.111:58512). 
Jan 20 01:44:33.284672 sshd[5637]: Accepted publickey for core from 20.161.92.111 port 58512 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:33.287712 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:33.297141 systemd-logind[1487]: New session 18 of user core. Jan 20 01:44:33.304235 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 01:44:33.678449 kubelet[2689]: E0120 01:44:33.678350 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:44:33.839464 sshd[5637]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:33.847109 systemd[1]: sshd@20-10.230.30.54:22-20.161.92.111:58512.service: Deactivated successfully. Jan 20 01:44:33.850445 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 01:44:33.851713 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit. Jan 20 01:44:33.853951 systemd-logind[1487]: Removed session 18. Jan 20 01:44:35.683420 kubelet[2689]: E0120 01:44:35.682827 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:44:37.676817 kubelet[2689]: E0120 01:44:37.676688 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:44:38.678538 kubelet[2689]: E0120 01:44:38.677507 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:44:38.678538 kubelet[2689]: E0120 01:44:38.678060 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:44:38.947265 systemd[1]: Started sshd@21-10.230.30.54:22-20.161.92.111:58522.service - OpenSSH per-connection server daemon (20.161.92.111:58522). Jan 20 01:44:39.533610 sshd[5671]: Accepted publickey for core from 20.161.92.111 port 58522 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:39.536210 sshd[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:39.546555 systemd-logind[1487]: New session 19 of user core. Jan 20 01:44:39.552069 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 20 01:44:40.044113 sshd[5671]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:40.050054 systemd[1]: sshd@21-10.230.30.54:22-20.161.92.111:58522.service: Deactivated successfully. Jan 20 01:44:40.054608 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 01:44:40.056563 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit. Jan 20 01:44:40.059478 systemd-logind[1487]: Removed session 19. Jan 20 01:44:43.618985 containerd[1496]: time="2026-01-20T01:44:43.618383562Z" level=info msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\"" Jan 20 01:44:43.680865 kubelet[2689]: E0120 01:44:43.678326 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.758 [WARNING][5693] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0", GenerateName:"calico-apiserver-66bfff8c98-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bb26b29-89e1-4055-a3dd-e9f6156c0d75", ResourceVersion:"1478", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bfff8c98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8", Pod:"calico-apiserver-66bfff8c98-mt7kn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali545a1c5c4a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.759 [INFO][5693] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.759 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" iface="eth0" netns="" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.760 [INFO][5693] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.760 [INFO][5693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.807 [INFO][5700] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.808 [INFO][5700] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.808 [INFO][5700] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.820 [WARNING][5700] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.820 [INFO][5700] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.822 [INFO][5700] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:44:43.830964 containerd[1496]: 2026-01-20 01:44:43.827 [INFO][5693] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.833107 containerd[1496]: time="2026-01-20T01:44:43.831023387Z" level=info msg="TearDown network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" successfully" Jan 20 01:44:43.833107 containerd[1496]: time="2026-01-20T01:44:43.831063471Z" level=info msg="StopPodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" returns successfully" Jan 20 01:44:43.833107 containerd[1496]: time="2026-01-20T01:44:43.831875844Z" level=info msg="RemovePodSandbox for \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\"" Jan 20 01:44:43.833107 containerd[1496]: time="2026-01-20T01:44:43.831938093Z" level=info msg="Forcibly stopping sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\"" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.883 [WARNING][5714] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0", GenerateName:"calico-apiserver-66bfff8c98-", Namespace:"calico-apiserver", SelfLink:"", UID:"5bb26b29-89e1-4055-a3dd-e9f6156c0d75", ResourceVersion:"1478", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 43, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66bfff8c98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"d70f2c11fef5f6c2be5ca24354466ea9cc8e1698eb2bba814e715d16bcc1f7e8", Pod:"calico-apiserver-66bfff8c98-mt7kn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali545a1c5c4a7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.883 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.883 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" iface="eth0" netns="" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.883 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.883 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.916 [INFO][5721] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.916 [INFO][5721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.916 [INFO][5721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.925 [WARNING][5721] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.925 [INFO][5721] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" HandleID="k8s-pod-network.8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--66bfff8c98--mt7kn-eth0" Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.927 [INFO][5721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:44:43.932097 containerd[1496]: 2026-01-20 01:44:43.929 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671" Jan 20 01:44:43.935453 containerd[1496]: time="2026-01-20T01:44:43.933577352Z" level=info msg="TearDown network for sandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" successfully" Jan 20 01:44:43.938417 containerd[1496]: time="2026-01-20T01:44:43.938345051Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:44:43.939111 containerd[1496]: time="2026-01-20T01:44:43.938420136Z" level=info msg="RemovePodSandbox \"8c3c34b7bb8d5b6e1d4fee07479850fe22abdc87f5833f4cdbae7406f278a671\" returns successfully" Jan 20 01:44:43.939203 containerd[1496]: time="2026-01-20T01:44:43.939153259Z" level=info msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\"" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:43.991 [WARNING][5735] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94aa1e8b-d364-40d2-9c05-39e890317a94", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0", Pod:"coredns-668d6bf9bc-gjtls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie99c059d2b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:43.991 [INFO][5735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:43.991 [INFO][5735] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" iface="eth0" netns="" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:43.991 [INFO][5735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:43.991 [INFO][5735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:44.026 [INFO][5742] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:44.026 [INFO][5742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:44.027 [INFO][5742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:44.036 [WARNING][5742] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:44.036 [INFO][5742] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:44.039 [INFO][5742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:44:44.043437 containerd[1496]: 2026-01-20 01:44:44.041 [INFO][5735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.044754 containerd[1496]: time="2026-01-20T01:44:44.043481879Z" level=info msg="TearDown network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" successfully" Jan 20 01:44:44.044754 containerd[1496]: time="2026-01-20T01:44:44.043522576Z" level=info msg="StopPodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" returns successfully" Jan 20 01:44:44.045600 containerd[1496]: time="2026-01-20T01:44:44.045094926Z" level=info msg="RemovePodSandbox for \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\"" Jan 20 01:44:44.045600 containerd[1496]: time="2026-01-20T01:44:44.045142740Z" level=info msg="Forcibly stopping sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\"" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.107 [WARNING][5756] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"94aa1e8b-d364-40d2-9c05-39e890317a94", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"33658bb022a7830d87cc0774f8773d764c061be547b9068f987ccb77514245b0", Pod:"coredns-668d6bf9bc-gjtls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie99c059d2b3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.107 [INFO][5756] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.107 [INFO][5756] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" iface="eth0" netns="" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.107 [INFO][5756] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.108 [INFO][5756] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.138 [INFO][5763] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.139 [INFO][5763] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.139 [INFO][5763] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.148 [WARNING][5763] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.148 [INFO][5763] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" HandleID="k8s-pod-network.846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Workload="srv--vpmg3.gb1.brightbox.com-k8s-coredns--668d6bf9bc--gjtls-eth0" Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.150 [INFO][5763] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:44:44.154198 containerd[1496]: 2026-01-20 01:44:44.152 [INFO][5756] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e" Jan 20 01:44:44.155082 containerd[1496]: time="2026-01-20T01:44:44.154226774Z" level=info msg="TearDown network for sandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" successfully" Jan 20 01:44:44.158251 containerd[1496]: time="2026-01-20T01:44:44.158189395Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:44:44.158424 containerd[1496]: time="2026-01-20T01:44:44.158384078Z" level=info msg="RemovePodSandbox \"846846bc39d6aa1912a6c0c4b76b1d1672ee3735a0de8ec0689b88a68e1c8b1e\" returns successfully" Jan 20 01:44:44.159439 containerd[1496]: time="2026-01-20T01:44:44.159030202Z" level=info msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\"" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.220 [WARNING][5777] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"63686bdb-630e-4c31-bb10-61a7b178bd09", ResourceVersion:"1458", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160", Pod:"calico-apiserver-799b8f498b-5jdcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29ef186aa54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.220 [INFO][5777] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.220 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" iface="eth0" netns="" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.220 [INFO][5777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.220 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.254 [INFO][5785] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.255 [INFO][5785] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.255 [INFO][5785] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.264 [WARNING][5785] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.264 [INFO][5785] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.266 [INFO][5785] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:44:44.271227 containerd[1496]: 2026-01-20 01:44:44.268 [INFO][5777] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.271227 containerd[1496]: time="2026-01-20T01:44:44.271198463Z" level=info msg="TearDown network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" successfully" Jan 20 01:44:44.274077 containerd[1496]: time="2026-01-20T01:44:44.271259069Z" level=info msg="StopPodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" returns successfully" Jan 20 01:44:44.274077 containerd[1496]: time="2026-01-20T01:44:44.273884491Z" level=info msg="RemovePodSandbox for \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\"" Jan 20 01:44:44.274077 containerd[1496]: time="2026-01-20T01:44:44.273928510Z" level=info msg="Forcibly stopping sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\"" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.333 [WARNING][5799] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0", GenerateName:"calico-apiserver-799b8f498b-", Namespace:"calico-apiserver", SelfLink:"", UID:"63686bdb-630e-4c31-bb10-61a7b178bd09", ResourceVersion:"1458", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 1, 42, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799b8f498b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-vpmg3.gb1.brightbox.com", ContainerID:"3f2a1e4bc1b59284c7f087f9ab231b87c60bc826458b42a6e3468e78109f3160", Pod:"calico-apiserver-799b8f498b-5jdcb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29ef186aa54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.334 [INFO][5799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.334 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" iface="eth0" netns="" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.334 [INFO][5799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.334 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.367 [INFO][5806] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.367 [INFO][5806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.367 [INFO][5806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.376 [WARNING][5806] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.376 [INFO][5806] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" HandleID="k8s-pod-network.0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Workload="srv--vpmg3.gb1.brightbox.com-k8s-calico--apiserver--799b8f498b--5jdcb-eth0" Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.378 [INFO][5806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 01:44:44.382876 containerd[1496]: 2026-01-20 01:44:44.380 [INFO][5799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06" Jan 20 01:44:44.382876 containerd[1496]: time="2026-01-20T01:44:44.382466058Z" level=info msg="TearDown network for sandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" successfully" Jan 20 01:44:44.386423 containerd[1496]: time="2026-01-20T01:44:44.386376997Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 20 01:44:44.386515 containerd[1496]: time="2026-01-20T01:44:44.386441525Z" level=info msg="RemovePodSandbox \"0ac636722e1cac1097021a7498cd220562e4e7f14b2022294a634b06f3b8bb06\" returns successfully" Jan 20 01:44:45.151241 systemd[1]: Started sshd@22-10.230.30.54:22-20.161.92.111:45208.service - OpenSSH per-connection server daemon (20.161.92.111:45208). Jan 20 01:44:45.678583 kubelet[2689]: E0120 01:44:45.677241 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:44:45.679556 kubelet[2689]: E0120 01:44:45.677483 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:44:45.744732 sshd[5813]: Accepted publickey for core from 20.161.92.111 port 45208 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:45.747400 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:45.756100 systemd-logind[1487]: New session 20 of user core. 
Jan 20 01:44:45.764153 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 01:44:46.259163 systemd[1]: Started sshd@23-10.230.30.54:22-164.92.217.44:58338.service - OpenSSH per-connection server daemon (164.92.217.44:58338). Jan 20 01:44:46.280197 sshd[5813]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:46.285555 systemd[1]: sshd@22-10.230.30.54:22-20.161.92.111:45208.service: Deactivated successfully. Jan 20 01:44:46.288645 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 01:44:46.290790 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit. Jan 20 01:44:46.292806 systemd-logind[1487]: Removed session 20. Jan 20 01:44:46.361456 sshd[5824]: Invalid user oracle from 164.92.217.44 port 58338 Jan 20 01:44:46.378630 sshd[5824]: Connection closed by invalid user oracle 164.92.217.44 port 58338 [preauth] Jan 20 01:44:46.381509 systemd[1]: sshd@23-10.230.30.54:22-164.92.217.44:58338.service: Deactivated successfully. Jan 20 01:44:48.676014 kubelet[2689]: E0120 01:44:48.674697 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:44:50.674739 kubelet[2689]: E0120 01:44:50.674591 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:44:51.386256 systemd[1]: Started sshd@24-10.230.30.54:22-20.161.92.111:45218.service - OpenSSH per-connection server daemon (20.161.92.111:45218). 
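Interleaved with the container cleanup, sshd and systemd-logind run the usual session lifecycle: an accepted public key opens a PAM session, logind wraps it in a transient session-N.scope unit, and teardown deactivates both the scope and the per-connection sshd unit. The "invalid user oracle" probe from 164.92.217.44 never gets that far; it is rejected preauth, so no session or scope is ever created. Active sessions can be listed with loginctl; a small sketch that shells out to it (assumes loginctl is on PATH):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Prints current logind sessions -- each row corresponds to a
    // session-N.scope like the ones started and stopped above.
    func main() {
    	out, err := exec.Command("loginctl", "list-sessions", "--no-legend").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "loginctl failed:", err)
    		os.Exit(1)
    	}
    	fmt.Print(string(out))
    }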
Jan 20 01:44:51.681224 kubelet[2689]: E0120 01:44:51.680798 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:44:51.958975 sshd[5833]: Accepted publickey for core from 20.161.92.111 port 45218 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:51.961535 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:51.972193 systemd-logind[1487]: New session 21 of user core. Jan 20 01:44:51.980076 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 01:44:52.483023 sshd[5833]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:52.487295 systemd[1]: sshd@24-10.230.30.54:22-20.161.92.111:45218.service: Deactivated successfully. Jan 20 01:44:52.490046 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 01:44:52.491913 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit. Jan 20 01:44:52.493726 systemd-logind[1487]: Removed session 21. Jan 20 01:44:52.587217 systemd[1]: Started sshd@25-10.230.30.54:22-20.161.92.111:49808.service - OpenSSH per-connection server daemon (20.161.92.111:49808). Jan 20 01:44:52.676386 kubelet[2689]: E0120 01:44:52.676237 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:44:53.169559 sshd[5846]: Accepted publickey for core from 20.161.92.111 port 49808 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:53.172504 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:53.181008 systemd-logind[1487]: New session 22 of user core. 
Jan 20 01:44:53.187145 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 20 01:44:53.966011 sshd[5846]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:53.976842 systemd[1]: sshd@25-10.230.30.54:22-20.161.92.111:49808.service: Deactivated successfully. Jan 20 01:44:53.979248 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 01:44:53.980380 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit. Jan 20 01:44:53.982450 systemd-logind[1487]: Removed session 22. Jan 20 01:44:54.069315 systemd[1]: Started sshd@26-10.230.30.54:22-20.161.92.111:49818.service - OpenSSH per-connection server daemon (20.161.92.111:49818). Jan 20 01:44:54.657269 sshd[5859]: Accepted publickey for core from 20.161.92.111 port 49818 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:54.660658 sshd[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:54.669282 systemd-logind[1487]: New session 23 of user core. Jan 20 01:44:54.675064 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 20 01:44:56.030723 sshd[5859]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:56.041590 systemd[1]: sshd@26-10.230.30.54:22-20.161.92.111:49818.service: Deactivated successfully. Jan 20 01:44:56.046732 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 01:44:56.048167 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit. Jan 20 01:44:56.050374 systemd-logind[1487]: Removed session 23. Jan 20 01:44:56.134365 systemd[1]: Started sshd@27-10.230.30.54:22-20.161.92.111:49822.service - OpenSSH per-connection server daemon (20.161.92.111:49822). Jan 20 01:44:56.725910 sshd[5877]: Accepted publickey for core from 20.161.92.111 port 49822 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:56.729657 sshd[5877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:56.737277 systemd-logind[1487]: New session 24 of user core. Jan 20 01:44:56.748213 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 01:44:57.572210 sshd[5877]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:57.579559 systemd[1]: sshd@27-10.230.30.54:22-20.161.92.111:49822.service: Deactivated successfully. Jan 20 01:44:57.583262 systemd[1]: session-24.scope: Deactivated successfully. Jan 20 01:44:57.585305 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit. Jan 20 01:44:57.586799 systemd-logind[1487]: Removed session 24. Jan 20 01:44:57.677212 systemd[1]: Started sshd@28-10.230.30.54:22-20.161.92.111:49824.service - OpenSSH per-connection server daemon (20.161.92.111:49824). 
Jan 20 01:44:57.686087 kubelet[2689]: E0120 01:44:57.683317 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:44:58.280058 sshd[5889]: Accepted publickey for core from 20.161.92.111 port 49824 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:44:58.281961 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:44:58.289663 systemd-logind[1487]: New session 25 of user core. Jan 20 01:44:58.297063 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 20 01:44:58.677913 kubelet[2689]: E0120 01:44:58.676469 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878" Jan 20 01:44:58.678424 containerd[1496]: time="2026-01-20T01:44:58.676590668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 01:44:58.813189 sshd[5889]: pam_unix(sshd:session): session closed for user core Jan 20 01:44:58.821600 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit. Jan 20 01:44:58.823937 systemd[1]: sshd@28-10.230.30.54:22-20.161.92.111:49824.service: Deactivated successfully. Jan 20 01:44:58.827120 systemd[1]: session-25.scope: Deactivated successfully. Jan 20 01:44:58.828669 systemd-logind[1487]: Removed session 25. 
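Every "Back-off pulling image" line here is kubelet waiting out its image-pull backoff rather than re-contacting the registry: each real pull failure (ErrImagePull) roughly doubles the delay before the next attempt, up to a cap. The 10-second initial delay and 5-minute ceiling used below are assumed kubelet defaults, not values taken from this log; they explain why the same pods resurface every few tens of seconds and then settle into a slower rhythm:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Successive image-pull retry delays under an assumed kubelet
    // default policy: 10s initial, doubling, capped at 5 minutes.
    func main() {
    	delay, maxDelay := 10*time.Second, 5*time.Minute
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("attempt %d: back off %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }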
Jan 20 01:44:59.001750 containerd[1496]: time="2026-01-20T01:44:59.001405389Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:44:59.003230 containerd[1496]: time="2026-01-20T01:44:59.003101936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 01:44:59.003438 containerd[1496]: time="2026-01-20T01:44:59.003346848Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 20 01:44:59.004059 kubelet[2689]: E0120 01:44:59.003945 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:59.004729 kubelet[2689]: E0120 01:44:59.004116 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 01:44:59.004729 kubelet[2689]: E0120 01:44:59.004457 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ttbj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-849c94fcc7-89lqr_calico-system(eedef20c-6169-4097-90af-4b5ed35e4c70): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 01:44:59.006367 kubelet[2689]: E0120 01:44:59.006279 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70" Jan 20 01:45:02.674884 kubelet[2689]: E0120 01:45:02.673768 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09" Jan 20 01:45:03.677387 containerd[1496]: time="2026-01-20T01:45:03.677053478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 01:45:03.925304 systemd[1]: Started sshd@29-10.230.30.54:22-20.161.92.111:50308.service - OpenSSH per-connection server daemon (20.161.92.111:50308). 
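The PullImage → "trying next host - response was http.StatusNotFound" → NotFound sequence (kube-controllers at 01:44:58-59 above) is containerd's registry resolver walking its configured hosts for ghcr.io and finding the manifest missing at every candidate. The failure is reproducible outside kubelet, either with ctr -n k8s.io images pull ghcr.io/flatcar/calico/kube-controllers:v3.30.4 or programmatically against the same socket and namespace kubelet uses; the sketch below assumes the containerd 1.x Go client import path:

    package main

    import (
    	"context"
    	"fmt"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    // Reproduces the failing pull against the same containerd instance
    // and namespace the CRI uses (pod images live under "k8s.io").
    func main() {
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
    	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
    		fmt.Println("pull failed:", err) // expect: ... not found
    	}
    }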
Jan 20 01:45:04.142452 containerd[1496]: time="2026-01-20T01:45:04.142323497Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:45:04.143988 containerd[1496]: time="2026-01-20T01:45:04.143924847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 01:45:04.144147 containerd[1496]: time="2026-01-20T01:45:04.144075960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 20 01:45:04.144476 kubelet[2689]: E0120 01:45:04.144382 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:45:04.145897 kubelet[2689]: E0120 01:45:04.144501 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 01:45:04.145897 kubelet[2689]: E0120 01:45:04.145073 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5a4b17c258084135abe35c802ee47f41,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 01:45:04.146712 containerd[1496]: 
time="2026-01-20T01:45:04.144987356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 01:45:04.454996 containerd[1496]: time="2026-01-20T01:45:04.454180933Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:45:04.455727 containerd[1496]: time="2026-01-20T01:45:04.455644348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 01:45:04.455870 containerd[1496]: time="2026-01-20T01:45:04.455772135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 20 01:45:04.456225 kubelet[2689]: E0120 01:45:04.456137 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:45:04.456325 kubelet[2689]: E0120 01:45:04.456253 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 01:45:04.457136 kubelet[2689]: E0120 01:45:04.456726 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 01:45:04.457348 containerd[1496]: time="2026-01-20T01:45:04.456766929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 01:45:04.526060 sshd[5908]: Accepted publickey for core from 20.161.92.111 port 50308 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU Jan 20 01:45:04.528360 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 01:45:04.540905 systemd-logind[1487]: New session 26 of user core. Jan 20 01:45:04.546047 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 20 01:45:04.760584 containerd[1496]: time="2026-01-20T01:45:04.760065277Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:45:04.761692 containerd[1496]: time="2026-01-20T01:45:04.761345660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 01:45:04.761692 containerd[1496]: time="2026-01-20T01:45:04.761419784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 20 01:45:04.762375 kubelet[2689]: E0120 01:45:04.762295 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:45:04.762513 kubelet[2689]: E0120 01:45:04.762464 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 01:45:04.764322 containerd[1496]: time="2026-01-20T01:45:04.764087037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 01:45:04.779711 kubelet[2689]: E0120 01:45:04.779580 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzvnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6df6c9ff7-pskf4_calico-system(dd0de801-e3e8-44b8-afed-383a8eb729ca): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 01:45:04.782099 kubelet[2689]: E0120 01:45:04.782036 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca" Jan 20 01:45:05.085200 containerd[1496]: time="2026-01-20T01:45:05.084753740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:45:05.086641 containerd[1496]: time="2026-01-20T01:45:05.086398213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 20 01:45:05.086641 containerd[1496]: time="2026-01-20T01:45:05.086508573Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 01:45:05.088047 kubelet[2689]: E0120 01:45:05.087061 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:45:05.088047 kubelet[2689]: E0120 01:45:05.087169 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 01:45:05.088047 kubelet[2689]: E0120 01:45:05.087387 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5cnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w59jj_calico-system(c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 01:45:05.088855 kubelet[2689]: E0120 01:45:05.088760 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd" Jan 20 01:45:05.289446 sshd[5908]: pam_unix(sshd:session): session closed for user core Jan 20 01:45:05.296659 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit. Jan 20 01:45:05.298545 systemd[1]: sshd@29-10.230.30.54:22-20.161.92.111:50308.service: Deactivated successfully. Jan 20 01:45:05.304309 systemd[1]: session-26.scope: Deactivated successfully. Jan 20 01:45:05.309269 systemd-logind[1487]: Removed session 26. Jan 20 01:45:05.676590 kubelet[2689]: E0120 01:45:05.676523 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75" Jan 20 01:45:08.674367 containerd[1496]: time="2026-01-20T01:45:08.674225559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 01:45:08.993689 containerd[1496]: time="2026-01-20T01:45:08.992498148Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 20 01:45:08.996569 containerd[1496]: time="2026-01-20T01:45:08.995857996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 01:45:08.996569 containerd[1496]: time="2026-01-20T01:45:08.995872131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 20 01:45:08.996704 kubelet[2689]: E0120 01:45:08.996467 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:45:08.996704 kubelet[2689]: E0120 01:45:08.996576 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 01:45:08.997413 kubelet[2689]: E0120 01:45:08.996814 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44szk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-fhvkc_calico-apiserver(573ad695-5762-4b18-9450-3954cd6448a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 01:45:08.998536 kubelet[2689]: E0120 01:45:08.998464 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-fhvkc" podUID="573ad695-5762-4b18-9450-3954cd6448a6" Jan 20 01:45:10.404365 systemd[1]: Started sshd@30-10.230.30.54:22-20.161.92.111:50320.service - OpenSSH per-connection server daemon (20.161.92.111:50320). 
Jan 20 01:45:10.676067 kubelet[2689]: E0120 01:45:10.675678 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-849c94fcc7-89lqr" podUID="eedef20c-6169-4097-90af-4b5ed35e4c70"
Jan 20 01:45:11.001086 sshd[5946]: Accepted publickey for core from 20.161.92.111 port 50320 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU
Jan 20 01:45:11.004644 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:45:11.019952 systemd-logind[1487]: New session 27 of user core.
Jan 20 01:45:11.028201 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 20 01:45:11.676662 containerd[1496]: time="2026-01-20T01:45:11.675316108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 20 01:45:11.747961 sshd[5946]: pam_unix(sshd:session): session closed for user core
Jan 20 01:45:11.756079 systemd[1]: sshd@30-10.230.30.54:22-20.161.92.111:50320.service: Deactivated successfully.
Jan 20 01:45:11.761574 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 01:45:11.764611 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit.
Jan 20 01:45:11.768228 systemd-logind[1487]: Removed session 27.
Jan 20 01:45:11.988461 containerd[1496]: time="2026-01-20T01:45:11.987936997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 01:45:11.989654 containerd[1496]: time="2026-01-20T01:45:11.989574139Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 20 01:45:11.989967 containerd[1496]: time="2026-01-20T01:45:11.989697132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:45:11.990876 kubelet[2689]: E0120 01:45:11.990413 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 01:45:11.990876 kubelet[2689]: E0120 01:45:11.990522 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 20 01:45:11.993095 kubelet[2689]: E0120 01:45:11.992470 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2zx2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kt727_calico-system(7f445973-85d0-4221-8af9-3dc0c3aa4878): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:45:11.994327 kubelet[2689]: E0120 01:45:11.994217 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kt727" podUID="7f445973-85d0-4221-8af9-3dc0c3aa4878"
Jan 20 01:45:16.856997 systemd[1]: Started sshd@31-10.230.30.54:22-20.161.92.111:56630.service - OpenSSH per-connection server daemon (20.161.92.111:56630).
Jan 20 01:45:17.483497 sshd[5980]: Accepted publickey for core from 20.161.92.111 port 56630 ssh2: RSA SHA256:vwYWaPWY4E+jnxDgal8jMKtiDg2o3eyknQaSavuqdLU
Jan 20 01:45:17.486962 sshd[5980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 20 01:45:17.495875 systemd-logind[1487]: New session 28 of user core.
Jan 20 01:45:17.505075 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 20 01:45:17.681919 containerd[1496]: time="2026-01-20T01:45:17.680667859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:45:18.048328 containerd[1496]: time="2026-01-20T01:45:18.047884885Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 01:45:18.050926 containerd[1496]: time="2026-01-20T01:45:18.050638662Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:45:18.051024 containerd[1496]: time="2026-01-20T01:45:18.050879298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:45:18.051489 kubelet[2689]: E0120 01:45:18.051392 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:45:18.053394 kubelet[2689]: E0120 01:45:18.051523 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:45:18.067674 kubelet[2689]: E0120 01:45:18.067586 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5jd9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799b8f498b-5jdcb_calico-apiserver(63686bdb-630e-4c31-bb10-61a7b178bd09): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:45:18.069764 kubelet[2689]: E0120 01:45:18.069695 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799b8f498b-5jdcb" podUID="63686bdb-630e-4c31-bb10-61a7b178bd09"
Jan 20 01:45:18.385595 sshd[5980]: pam_unix(sshd:session): session closed for user core
Jan 20 01:45:18.392645 systemd[1]: sshd@31-10.230.30.54:22-20.161.92.111:56630.service: Deactivated successfully.
Jan 20 01:45:18.401185 systemd[1]: session-28.scope: Deactivated successfully.
Jan 20 01:45:18.406344 systemd-logind[1487]: Session 28 logged out. Waiting for processes to exit.
Jan 20 01:45:18.410814 systemd-logind[1487]: Removed session 28.
Jan 20 01:45:19.637305 systemd[1]: Started sshd@32-10.230.30.54:22-164.92.217.44:32872.service - OpenSSH per-connection server daemon (164.92.217.44:32872).
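Between pull attempts the kubelet parks each failing container in ImagePullBackOff, which is why the same PullImage line for ghcr.io/flatcar/calico/apiserver:v3.30.4 recurs minutes apart. The sketch below illustrates that retry discipline; the 10 s initial delay doubling to a 300 s cap mirrors kubelet's commonly cited defaults and is an assumption here, not a value read from this log.

    # Illustrative sketch (assumed defaults, not taken from the log) of the
    # ErrImagePull -> ImagePullBackOff loop visible above: fail, wait, double
    # the wait up to a cap, then try again.
    import time
    from typing import Callable

    def pull_with_backoff(pull: Callable[[], None],
                          initial: float = 10.0, cap: float = 300.0) -> None:
        delay = initial
        while True:
            try:
                pull()               # a failing attempt surfaces as ErrImagePull
                return
            except RuntimeError as err:
                # The waiting state between attempts is ImagePullBackOff.
                print(f"ErrImagePull: {err}; backing off {delay:.0f}s")
                time.sleep(delay)
                delay = min(delay * 2, cap)

Because the v3.30.4 tags never resolve, a loop like this never exits for these pods; the log keeps alternating between a fresh ErrImagePull and a longer back-off until the image reference is corrected upstream.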
Jan 20 01:45:19.687417 kubelet[2689]: E0120 01:45:19.687335 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6df6c9ff7-pskf4" podUID="dd0de801-e3e8-44b8-afed-383a8eb729ca"
Jan 20 01:45:19.762257 sshd[5995]: Invalid user oracle from 164.92.217.44 port 32872
Jan 20 01:45:19.821044 sshd[5995]: Connection closed by invalid user oracle 164.92.217.44 port 32872 [preauth]
Jan 20 01:45:19.825422 systemd[1]: sshd@32-10.230.30.54:22-164.92.217.44:32872.service: Deactivated successfully.
Jan 20 01:45:20.676043 containerd[1496]: time="2026-01-20T01:45:20.675580077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 20 01:45:20.678160 kubelet[2689]: E0120 01:45:20.677971 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w59jj" podUID="c6594f9f-80a7-4dbf-a4b4-1d2817fc3bbd"
Jan 20 01:45:20.993118 containerd[1496]: time="2026-01-20T01:45:20.992647953Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 20 01:45:20.994400 containerd[1496]: time="2026-01-20T01:45:20.994206441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 20 01:45:20.994400 containerd[1496]: time="2026-01-20T01:45:20.994236637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 20 01:45:20.996185 kubelet[2689]: E0120 01:45:20.996076 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:45:20.998034 kubelet[2689]: E0120 01:45:20.996211 2689 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 20 01:45:20.999473 kubelet[2689]: E0120 01:45:20.999381 2689 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h49ps,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66bfff8c98-mt7kn_calico-apiserver(5bb26b29-89e1-4055-a3dd-e9f6156c0d75): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 20 01:45:21.000927 kubelet[2689]: E0120 01:45:21.000860 2689 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66bfff8c98-mt7kn" podUID="5bb26b29-89e1-4055-a3dd-e9f6156c0d75"